Jan 27 15:08:22 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 15:08:22 crc restorecon[4585]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:08:22 crc restorecon[4585]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 15:08:22 crc restorecon[4585]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc 
restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc 
restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 
15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc 
restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:08:22 crc restorecon[4585]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:08:22 crc restorecon[4585]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:22 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:23 crc 
restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23
crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc 
restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc 
restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 
crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc 
restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc 
restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:08:23 crc restorecon[4585]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc 
restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:08:23 crc restorecon[4585]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 15:08:24 crc kubenswrapper[4697]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 15:08:24 crc kubenswrapper[4697]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 15:08:24 crc kubenswrapper[4697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 15:08:24 crc kubenswrapper[4697]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 15:08:24 crc kubenswrapper[4697]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 15:08:24 crc kubenswrapper[4697]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.332768 4697 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.338979 4697 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339000 4697 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339007 4697 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339013 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339019 4697 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339025 4697 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339029 4697 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339035 4697 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339040 4697 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339047 4697 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339052 4697 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339059 4697 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339065 4697 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339073 4697 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339079 4697 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339084 4697 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339089 4697 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339094 4697 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339098 4697 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339103 4697 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339108 4697 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339114 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339119 4697 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339124 4697 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339128 4697 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339134 4697 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339139 4697 feature_gate.go:330] unrecognized 
feature gate: IngressControllerDynamicConfigurationManager Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339144 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339150 4697 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339156 4697 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339162 4697 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339168 4697 feature_gate.go:330] unrecognized feature gate: Example Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339173 4697 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339178 4697 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339183 4697 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339188 4697 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339194 4697 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339200 4697 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339205 4697 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339210 4697 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339216 4697 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339222 4697 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339227 4697 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339232 4697 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339237 4697 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339241 4697 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339246 4697 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339251 4697 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339256 4697 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339262 4697 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339267 4697 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339272 4697 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339277 4697 
feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339282 4697 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339286 4697 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339291 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339296 4697 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339300 4697 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339305 4697 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339310 4697 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339315 4697 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339319 4697 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339324 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339329 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339335 4697 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339341 4697 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339347 4697 feature_gate.go:330] unrecognized feature gate: 
VolumeGroupSnapshot Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339353 4697 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339359 4697 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339365 4697 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.339371 4697 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339484 4697 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339496 4697 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339507 4697 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339517 4697 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339525 4697 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339532 4697 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339541 4697 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339553 4697 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339558 4697 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339564 4697 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339570 4697 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 15:08:24 crc 
kubenswrapper[4697]: I0127 15:08:24.339577 4697 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339584 4697 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339590 4697 flags.go:64] FLAG: --cgroup-root="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339596 4697 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339602 4697 flags.go:64] FLAG: --client-ca-file="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339608 4697 flags.go:64] FLAG: --cloud-config="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339614 4697 flags.go:64] FLAG: --cloud-provider="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339620 4697 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339626 4697 flags.go:64] FLAG: --cluster-domain="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339632 4697 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339638 4697 flags.go:64] FLAG: --config-dir="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339643 4697 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339649 4697 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339657 4697 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339663 4697 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339668 4697 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339674 4697 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 15:08:24 crc 
kubenswrapper[4697]: I0127 15:08:24.339680 4697 flags.go:64] FLAG: --contention-profiling="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339685 4697 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339691 4697 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339697 4697 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339702 4697 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339710 4697 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339716 4697 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339722 4697 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339727 4697 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339733 4697 flags.go:64] FLAG: --enable-server="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339739 4697 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339746 4697 flags.go:64] FLAG: --event-burst="100" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339752 4697 flags.go:64] FLAG: --event-qps="50" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339757 4697 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339763 4697 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339769 4697 flags.go:64] FLAG: --eviction-hard="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339777 4697 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 15:08:24 crc 
kubenswrapper[4697]: I0127 15:08:24.339803 4697 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339809 4697 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339815 4697 flags.go:64] FLAG: --eviction-soft="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339821 4697 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339826 4697 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339836 4697 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339842 4697 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339848 4697 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339853 4697 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339859 4697 flags.go:64] FLAG: --feature-gates="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339866 4697 flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339872 4697 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339877 4697 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339883 4697 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339889 4697 flags.go:64] FLAG: --healthz-port="10248" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339895 4697 flags.go:64] FLAG: --help="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339900 4697 flags.go:64] FLAG: --hostname-override="" Jan 27 15:08:24 crc 
kubenswrapper[4697]: I0127 15:08:24.339906 4697 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339911 4697 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339917 4697 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339925 4697 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339930 4697 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339936 4697 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339942 4697 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339947 4697 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339953 4697 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339959 4697 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339965 4697 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339970 4697 flags.go:64] FLAG: --kube-reserved="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339975 4697 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339981 4697 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339987 4697 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339993 4697 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339999 4697 flags.go:64] FLAG: --lock-file="" Jan 27 
15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340004 4697 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340010 4697 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340016 4697 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340025 4697 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340030 4697 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340036 4697 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340042 4697 flags.go:64] FLAG: --logging-format="text" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340048 4697 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340054 4697 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340060 4697 flags.go:64] FLAG: --manifest-url="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340065 4697 flags.go:64] FLAG: --manifest-url-header="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340073 4697 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340078 4697 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340085 4697 flags.go:64] FLAG: --max-pods="110" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340091 4697 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340096 4697 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340102 4697 flags.go:64] FLAG: 
--memory-manager-policy="None" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340107 4697 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340113 4697 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340119 4697 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340125 4697 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340139 4697 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340145 4697 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340150 4697 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340156 4697 flags.go:64] FLAG: --pod-cidr="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340162 4697 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340171 4697 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340177 4697 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340183 4697 flags.go:64] FLAG: --pods-per-core="0" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340188 4697 flags.go:64] FLAG: --port="10250" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340194 4697 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340199 4697 flags.go:64] FLAG: --provider-id="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 
15:08:24.340205 4697 flags.go:64] FLAG: --qos-reserved="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340211 4697 flags.go:64] FLAG: --read-only-port="10255" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340216 4697 flags.go:64] FLAG: --register-node="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340222 4697 flags.go:64] FLAG: --register-schedulable="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340228 4697 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340238 4697 flags.go:64] FLAG: --registry-burst="10" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340243 4697 flags.go:64] FLAG: --registry-qps="5" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340249 4697 flags.go:64] FLAG: --reserved-cpus="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340254 4697 flags.go:64] FLAG: --reserved-memory="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340261 4697 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340267 4697 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340274 4697 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340279 4697 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340285 4697 flags.go:64] FLAG: --runonce="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340290 4697 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340296 4697 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340302 4697 flags.go:64] FLAG: --seccomp-default="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340308 4697 
flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340313 4697 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340319 4697 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340325 4697 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340332 4697 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340339 4697 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340346 4697 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340353 4697 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340360 4697 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340367 4697 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340374 4697 flags.go:64] FLAG: --system-cgroups="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340380 4697 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340389 4697 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340395 4697 flags.go:64] FLAG: --tls-cert-file="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340401 4697 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340407 4697 flags.go:64] FLAG: --tls-min-version="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340413 4697 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 
15:08:24.340418 4697 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340424 4697 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340430 4697 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340436 4697 flags.go:64] FLAG: --v="2" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340444 4697 flags.go:64] FLAG: --version="false" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340451 4697 flags.go:64] FLAG: --vmodule="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340458 4697 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340464 4697 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340607 4697 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340613 4697 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340619 4697 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340625 4697 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340630 4697 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340636 4697 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340641 4697 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340646 4697 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 15:08:24 crc 
kubenswrapper[4697]: W0127 15:08:24.340651 4697 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340656 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340665 4697 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340670 4697 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340675 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340680 4697 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340685 4697 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340690 4697 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340695 4697 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340700 4697 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340704 4697 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340709 4697 feature_gate.go:330] unrecognized feature gate: Example Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340714 4697 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340719 4697 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340723 4697 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340728 4697 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340733 4697 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340738 4697 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340743 4697 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340747 4697 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340752 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340756 4697 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340761 4697 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340766 4697 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340771 4697 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340777 4697 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
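The `flags.go:64] FLAG: --name="value"` lines above record every command-line flag value the kubelet resolved at startup. As an illustrative sketch only (not part of the kubelet or any shipped tooling), a small Python helper can turn captured journal text back into a dictionary for inspection; the `parse_kubelet_flags` name and the regex are assumptions made for this example:

```python
import re

# Hypothetical helper (illustration only, not kubelet tooling): extract
# the flag values the kubelet logged in lines shaped like
#   ... flags.go:64] FLAG: --name="value"
FLAG_RE = re.compile(r'flags\.go:64\] FLAG: --([A-Za-z0-9-]+)="([^"]*)"')

def parse_kubelet_flags(log_text: str) -> dict:
    """Return a {flag-name: value} dict for every FLAG: line found."""
    return {m.group(1): m.group(2) for m in FLAG_RE.finditer(log_text)}

# Two sample entries copied from the journal capture above.
sample = (
    'Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.339638 4697 '
    'flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" '
    'Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.340085 4697 '
    'flags.go:64] FLAG: --max-pods="110"'
)
flags = parse_kubelet_flags(sample)
print(flags["config"], flags["max-pods"])
```

Because the journal wraps entries across physical lines, matching on the `flags.go:64] FLAG:` marker rather than line starts keeps the sketch robust to the wrapping seen in this capture.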
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340805 4697 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340812 4697 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340818 4697 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340823 4697 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340829 4697 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340834 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340840 4697 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340848 4697 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340856 4697 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340862 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340867 4697 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340872 4697 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340877 4697 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340882 4697 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340887 4697 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340891 4697 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340896 4697 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340901 4697 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340906 4697 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340911 4697 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340915 4697 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340920 4697 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340925 4697 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340930 4697 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340934 4697 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340939 4697 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340944 4697 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340948 4697 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340953 4697 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340958 4697 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340963 4697 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340969 4697 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
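The `unrecognized feature gate` warnings above repeat on each pass over the feature-gate configuration (note that gates such as `ChunkSizeMiB` appear in more than one pass in this capture). A minimal, hypothetical sketch for deduplicating and counting them from captured journal text — `count_unrecognized_gates` and its regex are assumptions for illustration, not kubelet tooling:

```python
import re
from collections import Counter

# Hypothetical helper (illustration only): tally the
# "unrecognized feature gate" warnings emitted at feature_gate.go:330.
GATE_RE = re.compile(r'feature_gate\.go:330\] unrecognized feature gate: (\S+)')

def count_unrecognized_gates(log_text: str) -> Counter:
    """Map each unrecognized gate name to how often it was warned about."""
    return Counter(GATE_RE.findall(log_text))

# Three sample warnings copied from the journal capture above; ChunkSizeMiB
# appears once per configuration pass.
sample = (
    'W0127 15:08:24.339065 4697 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB '
    'W0127 15:08:24.340651 4697 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB '
    'W0127 15:08:24.339073 4697 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation'
)
print(count_unrecognized_gates(sample))
```

Collapsing the repeats this way makes it easier to see the distinct set of gate names the kubelet did not recognize, separate from the GA/deprecated gate notices (`feature_gate.go:351`/`:353`) interleaved with them.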
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340974 4697 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340979 4697 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340984 4697 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340989 4697 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.340995 4697 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.341011 4697 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.351758 4697 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.351814 4697 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352032 4697 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352049 4697 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352058 4697 feature_gate.go:330] unrecognized feature gate: 
PrivateHostedZoneAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352064 4697 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352071 4697 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352078 4697 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352085 4697 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352093 4697 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352099 4697 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352106 4697 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352120 4697 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352127 4697 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352135 4697 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352142 4697 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352148 4697 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352156 4697 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352162 4697 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 15:08:24 crc kubenswrapper[4697]: 
W0127 15:08:24.352169 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352175 4697 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352183 4697 feature_gate.go:330] unrecognized feature gate: Example Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352189 4697 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352196 4697 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352203 4697 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352217 4697 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352226 4697 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352233 4697 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352241 4697 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352248 4697 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352255 4697 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352262 4697 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352270 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352280 4697 feature_gate.go:351] Setting deprecated feature gate 
KMSv1=true. It will be removed in a future release. Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352290 4697 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352298 4697 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352305 4697 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352317 4697 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352324 4697 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352331 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352338 4697 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352345 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352353 4697 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352362 4697 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352375 4697 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352383 4697 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352390 4697 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352398 4697 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352404 4697 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352420 4697 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352430 4697 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352437 4697 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352445 4697 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352453 4697 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352460 4697 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352467 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352477 4697 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352485 4697 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 15:08:24 crc 
kubenswrapper[4697]: W0127 15:08:24.352492 4697 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352500 4697 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352507 4697 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352514 4697 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352530 4697 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352537 4697 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352545 4697 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352553 4697 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352560 4697 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352567 4697 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352574 4697 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352581 4697 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352590 4697 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352600 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.352609 4697 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.352622 4697 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353079 4697 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353096 4697 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353112 4697 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353120 4697 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353129 4697 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353137 4697 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353144 4697 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353151 4697 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 
15:08:24.353158 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353167 4697 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353174 4697 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353181 4697 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353188 4697 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353195 4697 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353202 4697 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353218 4697 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353227 4697 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353235 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353242 4697 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353252 4697 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353260 4697 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353267 4697 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353274 4697 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353281 4697 feature_gate.go:330] unrecognized feature gate: Example Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353292 4697 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353301 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353309 4697 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353325 4697 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353334 4697 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353407 4697 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353417 4697 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353425 4697 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353432 4697 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353440 4697 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353448 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353456 4697 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353507 4697 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353551 4697 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353560 4697 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353851 4697 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353866 4697 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353874 4697 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353882 4697 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353890 4697 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353897 4697 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353905 4697 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353914 4697 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353924 4697 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353932 4697 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353939 4697 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353948 4697 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353956 4697 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353964 4697 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353971 4697 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353978 4697 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353986 4697 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.353994 4697 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354002 4697 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354010 4697 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354016 4697 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354024 4697 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354031 4697 feature_gate.go:330] unrecognized feature 
gate: AdditionalRoutingCapabilities Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354038 4697 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354045 4697 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354052 4697 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354059 4697 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354067 4697 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354074 4697 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354081 4697 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354088 4697 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.354096 4697 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.354109 4697 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.354442 4697 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 
15:08:24.360611 4697 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.360759 4697 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.362695 4697 server.go:997] "Starting client certificate rotation" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.362735 4697 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.362930 4697 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-07 02:04:55.271695975 +0000 UTC Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.363029 4697 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.391616 4697 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.394831 4697 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.397470 4697 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.419074 4697 log.go:25] "Validated CRI v1 runtime API" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 
15:08:24.460260 4697 log.go:25] "Validated CRI v1 image API" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.462539 4697 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.468583 4697 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-15-02-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.468630 4697 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.480059 4697 manager.go:217] Machine: {Timestamp:2026-01-27 15:08:24.477911892 +0000 UTC m=+0.650311693 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199476736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:69bca9ab-721f-415b-ad88-6626c7795f3c BootID:74b869f4-b1e4-4686-af4e-9516e0fb5017 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:2519945216 Type:vfs Inodes:615221 
HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5e:6d:8f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5e:6d:8f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9e:ab:9a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6c:3c:87 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ae:a0:71 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:45:6a:4c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:68:90:ba:d0:19 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fa:9e:41:e1:1a:d0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199476736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 
Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.480246 4697 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.480353 4697 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.480572 4697 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.480735 4697 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.480768 4697 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.481029 4697 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.481040 4697 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.481517 4697 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.481554 4697 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.481736 4697 state_mem.go:36] "Initialized new in-memory state store" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.481845 4697 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.486878 4697 kubelet.go:418] "Attempting to sync node with API server" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.486907 4697 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.486924 4697 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.486938 4697 kubelet.go:324] "Adding apiserver pod source" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.486950 4697 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 
15:08:24.493923 4697 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.495666 4697 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.496817 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.496985 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.497379 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.497559 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.497896 4697 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 15:08:24 
crc kubenswrapper[4697]: I0127 15:08:24.500411 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500443 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500452 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500461 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500476 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500484 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500492 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500505 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500517 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500527 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500550 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.500559 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.501690 4697 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.502255 4697 server.go:1280] "Started kubelet" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 
15:08:24.503087 4697 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.503089 4697 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.503483 4697 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.503734 4697 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 15:08:24 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.506674 4697 server.go:460] "Adding debug handlers to kubelet server" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.507156 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.507191 4697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.508015 4697 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.508041 4697 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.507615 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:09:09.541110763 +0000 UTC Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.508324 4697 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.509525 4697 desired_state_of_world_populator.go:146] 
"Desired state populator starts to run" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.510345 4697 factory.go:55] Registering systemd factory Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.510375 4697 factory.go:221] Registration of the systemd container factory successfully Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.511255 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="200ms" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.511354 4697 factory.go:153] Registering CRI-O factory Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.511365 4697 factory.go:221] Registration of the crio container factory successfully Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.511413 4697 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.511387 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.511434 4697 factory.go:103] Registering Raw factory Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.511474 4697 manager.go:1196] Started watching for new ooms in manager Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.511472 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.512603 4697 manager.go:319] Starting recovery of all containers Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.513973 4697 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.245:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9efb017e4e0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:08:24.502218253 +0000 UTC m=+0.674618054,LastTimestamp:2026-01-27 15:08:24.502218253 +0000 UTC m=+0.674618054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525767 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525865 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525878 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525889 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525901 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525915 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525925 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525937 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525950 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525963 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525972 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525985 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.525996 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526013 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526024 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" 
seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526054 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526063 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526072 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526083 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526093 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526104 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526170 4697 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526183 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526197 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526209 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526222 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526236 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526253 4697 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526263 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526276 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526285 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526299 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526310 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526321 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526334 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526344 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526356 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526367 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526378 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526392 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526402 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526414 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526426 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526436 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526450 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.526462 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528523 4697 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528610 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528634 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528648 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528667 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528682 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528701 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528729 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528755 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528861 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528919 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528943 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528963 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.528987 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529004 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529024 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529043 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529087 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" 
seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529112 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529130 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529185 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529210 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529332 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529368 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 
15:08:24.529408 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529459 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529477 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529648 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.529678 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531204 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531271 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531329 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531398 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531697 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531851 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531874 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531889 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531905 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531919 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531931 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531944 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531957 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531973 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531983 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.531995 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532010 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532024 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532038 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532051 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532088 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532102 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532114 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532126 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532138 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532151 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532164 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532187 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532198 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532210 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532239 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532255 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532274 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532293 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532310 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532324 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532336 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532350 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532362 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532373 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532386 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532396 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532407 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532423 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532437 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532450 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532463 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532477 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532491 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532504 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532515 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532526 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532536 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532546 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532560 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532570 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" 
seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532585 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532598 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532610 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532625 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532637 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532648 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: 
I0127 15:08:24.532661 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532675 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532688 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.532701 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533049 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533068 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533084 4697 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533096 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533111 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533124 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533139 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533153 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533166 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533178 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533191 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533205 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533217 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533228 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533241 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533253 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533266 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533280 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533295 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533311 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533325 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533347 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533360 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533372 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533385 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533398 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533411 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533422 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533434 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533446 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533459 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533506 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533521 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 
15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533534 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533547 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533561 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533575 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533592 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533608 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533622 4697 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533634 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533647 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533659 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533671 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533683 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533696 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533707 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533719 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.533731 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534530 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534570 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534588 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534615 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534631 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534647 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534665 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534682 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534699 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534717 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534732 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534749 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534820 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534857 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534873 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534889 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534906 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534924 4697 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534937 4697 reconstruct.go:97] "Volume reconstruction finished" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.534949 4697 reconciler.go:26] "Reconciler: start to sync state" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.542741 4697 manager.go:324] Recovery completed Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.552399 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.553848 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.554001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.554115 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.554924 4697 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.555058 4697 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.555196 4697 state_mem.go:36] "Initialized new in-memory state store" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.565004 4697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.567016 4697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.567072 4697 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.567110 4697 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.567164 4697 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 15:08:24 crc kubenswrapper[4697]: W0127 15:08:24.568497 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.568553 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:24 crc 
kubenswrapper[4697]: I0127 15:08:24.592142 4697 policy_none.go:49] "None policy: Start" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.593727 4697 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.593751 4697 state_mem.go:35] "Initializing new in-memory state store" Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.609095 4697 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.643136 4697 manager.go:334] "Starting Device Plugin manager" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.643197 4697 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.643216 4697 server.go:79] "Starting device plugin registration server" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.643733 4697 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.643754 4697 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.644039 4697 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.644108 4697 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.644115 4697 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.651749 4697 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.667530 4697 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.667645 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.668869 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.668901 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.668910 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.669043 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.669288 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.669346 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.669808 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.669826 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.669834 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.669917 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.670148 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.670194 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.670273 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.670296 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.670305 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.671921 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.671938 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.671957 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.671969 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.671958 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.672058 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.672068 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: 
I0127 15:08:24.672458 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.672517 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.673239 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.673271 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.673282 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.673441 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.673480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.673497 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.673507 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.673854 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.673882 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.674336 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.674363 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.674375 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.674526 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.674552 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.675362 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.675385 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.675386 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.675408 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.675393 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.675420 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.712101 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="400ms" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737079 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737166 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737197 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737244 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: 
I0127 15:08:24.737266 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737325 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737369 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737449 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737489 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc 
kubenswrapper[4697]: I0127 15:08:24.737507 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737528 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737545 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737587 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737625 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.737661 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.744497 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.746558 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.746595 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.746621 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.746649 4697 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.747285 4697 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.245:6443: connect: connection refused" node="crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839329 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839401 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839436 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839471 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839503 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839532 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839560 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839569 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839646 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839606 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839707 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839714 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839742 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839755 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839780 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839779 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839841 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839847 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839860 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839886 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839878 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839935 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839958 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839935 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.839993 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.840062 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.840105 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.840155 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.840207 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 
15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.840254 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.948299 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.950180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.950236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.950258 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:24 crc kubenswrapper[4697]: I0127 15:08:24.950297 4697 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:08:24 crc kubenswrapper[4697]: E0127 15:08:24.950838 4697 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.245:6443: connect: connection refused" node="crc" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.016406 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.022489 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.054686 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:25 crc kubenswrapper[4697]: W0127 15:08:25.071996 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-436f2fa7994156d6a10e97035a8bb1391f116851d367bb668bb3abf103dacb7e WatchSource:0}: Error finding container 436f2fa7994156d6a10e97035a8bb1391f116851d367bb668bb3abf103dacb7e: Status 404 returned error can't find the container with id 436f2fa7994156d6a10e97035a8bb1391f116851d367bb668bb3abf103dacb7e Jan 27 15:08:25 crc kubenswrapper[4697]: W0127 15:08:25.074188 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-63e762087a1e7bb7fc0475c0fe061ae384b66803e4b1d2d8c3ff312ba13708ec WatchSource:0}: Error finding container 63e762087a1e7bb7fc0475c0fe061ae384b66803e4b1d2d8c3ff312ba13708ec: Status 404 returned error can't find the container with id 63e762087a1e7bb7fc0475c0fe061ae384b66803e4b1d2d8c3ff312ba13708ec Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.076695 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:08:25 crc kubenswrapper[4697]: W0127 15:08:25.078263 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-01c011c92286c4a7149a5abc21cae28a2be78b7a7dbeaa0daec3b9097bd2ceac WatchSource:0}: Error finding container 01c011c92286c4a7149a5abc21cae28a2be78b7a7dbeaa0daec3b9097bd2ceac: Status 404 returned error can't find the container with id 01c011c92286c4a7149a5abc21cae28a2be78b7a7dbeaa0daec3b9097bd2ceac Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.083118 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:08:25 crc kubenswrapper[4697]: W0127 15:08:25.088281 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d05e01330e190648841f643ebb4779e3e3d2c49f1226458b9cc2dff7f1dc3823 WatchSource:0}: Error finding container d05e01330e190648841f643ebb4779e3e3d2c49f1226458b9cc2dff7f1dc3823: Status 404 returned error can't find the container with id d05e01330e190648841f643ebb4779e3e3d2c49f1226458b9cc2dff7f1dc3823 Jan 27 15:08:25 crc kubenswrapper[4697]: W0127 15:08:25.105639 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2e0aac68dee8edf990f816f8f8a009b42d7be3fcc96417947367e95418e2c183 WatchSource:0}: Error finding container 2e0aac68dee8edf990f816f8f8a009b42d7be3fcc96417947367e95418e2c183: Status 404 returned error can't find the container with id 2e0aac68dee8edf990f816f8f8a009b42d7be3fcc96417947367e95418e2c183 Jan 27 15:08:25 crc kubenswrapper[4697]: E0127 15:08:25.112853 4697 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="800ms" Jan 27 15:08:25 crc kubenswrapper[4697]: W0127 15:08:25.308432 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:25 crc kubenswrapper[4697]: E0127 15:08:25.308530 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.351111 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.352406 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.352443 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.352455 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.352484 4697 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:08:25 crc kubenswrapper[4697]: E0127 15:08:25.352765 4697 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.245:6443: 
connect: connection refused" node="crc" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.504050 4697 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.509260 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:49:40.220775211 +0000 UTC Jan 27 15:08:25 crc kubenswrapper[4697]: W0127 15:08:25.563225 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:25 crc kubenswrapper[4697]: E0127 15:08:25.563323 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.573237 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d05e01330e190648841f643ebb4779e3e3d2c49f1226458b9cc2dff7f1dc3823"} Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.574417 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"01c011c92286c4a7149a5abc21cae28a2be78b7a7dbeaa0daec3b9097bd2ceac"} Jan 27 15:08:25 
crc kubenswrapper[4697]: I0127 15:08:25.576622 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63e762087a1e7bb7fc0475c0fe061ae384b66803e4b1d2d8c3ff312ba13708ec"} Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.577590 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"436f2fa7994156d6a10e97035a8bb1391f116851d367bb668bb3abf103dacb7e"} Jan 27 15:08:25 crc kubenswrapper[4697]: I0127 15:08:25.578510 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2e0aac68dee8edf990f816f8f8a009b42d7be3fcc96417947367e95418e2c183"} Jan 27 15:08:25 crc kubenswrapper[4697]: E0127 15:08:25.914306 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="1.6s" Jan 27 15:08:25 crc kubenswrapper[4697]: W0127 15:08:25.946866 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:25 crc kubenswrapper[4697]: E0127 15:08:25.946990 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" 
logger="UnhandledError" Jan 27 15:08:26 crc kubenswrapper[4697]: W0127 15:08:26.098929 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:26 crc kubenswrapper[4697]: E0127 15:08:26.099126 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.153401 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.155248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.155303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.155317 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.155355 4697 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:08:26 crc kubenswrapper[4697]: E0127 15:08:26.155901 4697 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.245:6443: connect: connection refused" node="crc" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.505165 4697 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.510501 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 04:50:31.379593737 +0000 UTC Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.557642 4697 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 15:08:26 crc kubenswrapper[4697]: E0127 15:08:26.558659 4697 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.582541 4697 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="455c5c73e0c973f4f29466798aaa9e03b0a1768678f818b93faad8a79b5c43b9" exitCode=0 Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.582603 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"455c5c73e0c973f4f29466798aaa9e03b0a1768678f818b93faad8a79b5c43b9"} Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.582717 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.583948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.583993 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.584011 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.586012 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f"} Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.586065 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c"} Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.586079 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691"} Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.586091 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d"} Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.586080 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.586969 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 
15:08:26.587002 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.587013 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.588655 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453" exitCode=0 Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.588740 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453"} Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.588776 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.589938 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.589960 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.589969 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.590549 4697 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c" exitCode=0 Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.590591 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c"} Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.590742 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.591949 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.591978 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.591990 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.592206 4697 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72" exitCode=0 Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.592242 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72"} Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.592297 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.593072 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.593115 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.593132 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.595469 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.596234 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.596260 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:26 crc kubenswrapper[4697]: I0127 15:08:26.596267 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:26 crc kubenswrapper[4697]: E0127 15:08:26.810813 4697 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.245:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9efb017e4e0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:08:24.502218253 +0000 UTC m=+0.674618054,LastTimestamp:2026-01-27 15:08:24.502218253 +0000 UTC m=+0.674618054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.504041 4697 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 
15:08:27.511579 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:47:10.848629937 +0000 UTC Jan 27 15:08:27 crc kubenswrapper[4697]: E0127 15:08:27.514909 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="3.2s" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.597315 4697 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313" exitCode=0 Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.597438 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313"} Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.597439 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.598546 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.598598 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.598612 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.599828 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904"} Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.599853 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9af166859a55cb5f718a1750f4ce20f5c4259e1adad06c609ce66a907974b3ab"} Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.599866 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8"} Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.599883 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.600528 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.600563 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.600573 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.602682 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6cd6b4774eb9f0e9e586080499fe34cd307cdd0257abf0e45e717093cbef8d28"} Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.602723 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:27 crc 
kubenswrapper[4697]: I0127 15:08:27.603917 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.603950 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.603961 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.606066 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044"} Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.606107 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.606114 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9"} Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.606131 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80"} Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.606143 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94"} Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.607628 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.607664 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.607673 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.720117 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.756014 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.757773 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.757859 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.757875 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:27 crc kubenswrapper[4697]: I0127 15:08:27.757912 4697 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:08:27 crc kubenswrapper[4697]: E0127 15:08:27.758766 4697 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.245:6443: connect: connection refused" node="crc" Jan 27 15:08:27 crc kubenswrapper[4697]: W0127 15:08:27.774580 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
38.102.83.245:6443: connect: connection refused Jan 27 15:08:27 crc kubenswrapper[4697]: E0127 15:08:27.774668 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:27 crc kubenswrapper[4697]: W0127 15:08:27.780844 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:27 crc kubenswrapper[4697]: E0127 15:08:27.780981 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:27 crc kubenswrapper[4697]: W0127 15:08:27.815276 4697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:27 crc kubenswrapper[4697]: E0127 15:08:27.815374 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:27 crc kubenswrapper[4697]: W0127 15:08:27.970763 4697 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Jan 27 15:08:27 crc kubenswrapper[4697]: E0127 15:08:27.970882 4697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.058617 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.071044 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.512013 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:47:15.8606674 +0000 UTC Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.614116 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a476130be05ce9f6b77a8c4d6e7d5b70c09a080100f8168ccc054b0b900edb8"} Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.614202 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.615536 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:28 crc 
kubenswrapper[4697]: I0127 15:08:28.615589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.615607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.616637 4697 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d" exitCode=0 Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.616750 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.616761 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d"} Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.617246 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.617337 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.617575 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.618625 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.618643 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.618675 4697 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.618691 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.618651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.618750 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.618961 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.619016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.619034 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.619526 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.619547 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:28 crc kubenswrapper[4697]: I0127 15:08:28.619560 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.512542 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:21:12.818713016 +0000 UTC Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623085 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3889f71127ca21494f008231cfc9d7f1a3c106ea419b6abd1ccebeccdcc749a4"} Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623127 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623139 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f35bc4d5235d591cc0e2af6294fb97d2fc6d7e84ce8a179acac6ea4ea4b9e5ea"} Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623155 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623157 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623183 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623157 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"db92f9a5afaba8bbda4d8ea9bcb99b5ad334aef7f68b0cf85da3fbf0a1816d8d"} Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623257 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3eb075712ef240052637eb573fea6b47aced80df441ea60774ed40e4e35c8fd8"} Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623271 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d0b10d0995f70041a7467f958a2d131f342a916cce576ad086a33b41bc2864fb"} Jan 27 15:08:29 crc 
kubenswrapper[4697]: I0127 15:08:29.623271 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623344 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623951 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623971 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.623979 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.624103 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.624127 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.624104 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.624138 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.624149 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.624159 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.624325 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.624341 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.624350 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:29 crc kubenswrapper[4697]: I0127 15:08:29.889122 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.390564 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.513218 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:12:42.063283105 +0000 UTC Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.627264 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.627325 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.627274 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.628475 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.628510 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.628522 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 
15:08:30.628483 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.628599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.628619 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.953379 4697 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.959743 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.961017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.961058 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.961068 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:30 crc kubenswrapper[4697]: I0127 15:08:30.961093 4697 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.418383 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.418567 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.418611 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:31 crc 
kubenswrapper[4697]: I0127 15:08:31.421611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.421650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.421677 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.513512 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:13:57.514650304 +0000 UTC Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.598050 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.629548 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.629570 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.630611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.630647 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.630658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.630683 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 
15:08:31.630697 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:31 crc kubenswrapper[4697]: I0127 15:08:31.630705 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:32 crc kubenswrapper[4697]: I0127 15:08:32.513956 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:47:31.810387676 +0000 UTC Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.349066 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.349286 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.350262 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.350286 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.350294 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.514839 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:49:45.757091505 +0000 UTC Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.920087 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.920224 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.920260 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.921196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.921231 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:33 crc kubenswrapper[4697]: I0127 15:08:33.921245 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:34 crc kubenswrapper[4697]: I0127 15:08:34.355848 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:34 crc kubenswrapper[4697]: I0127 15:08:34.419131 4697 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:08:34 crc kubenswrapper[4697]: I0127 15:08:34.419222 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:08:34 crc kubenswrapper[4697]: I0127 15:08:34.515447 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:44:06.542979338 +0000 UTC Jan 27 15:08:34 crc kubenswrapper[4697]: I0127 15:08:34.636109 
4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:34 crc kubenswrapper[4697]: I0127 15:08:34.637361 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:34 crc kubenswrapper[4697]: I0127 15:08:34.637415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:34 crc kubenswrapper[4697]: I0127 15:08:34.637427 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:34 crc kubenswrapper[4697]: E0127 15:08:34.652018 4697 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 15:08:35 crc kubenswrapper[4697]: I0127 15:08:35.516341 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:55:10.362867521 +0000 UTC Jan 27 15:08:36 crc kubenswrapper[4697]: I0127 15:08:36.516933 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:51:13.057200214 +0000 UTC Jan 27 15:08:37 crc kubenswrapper[4697]: I0127 15:08:37.517965 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:48:23.336471264 +0000 UTC Jan 27 15:08:38 crc kubenswrapper[4697]: I0127 15:08:38.504991 4697 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 15:08:38 crc kubenswrapper[4697]: I0127 15:08:38.518335 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-11-06 19:25:42.271979496 +0000 UTC Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.400677 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.400960 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.402310 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.402355 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.402367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.515262 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.520129 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:38:34.927031441 +0000 UTC Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.548694 4697 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.548751 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.554816 4697 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.554879 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.647827 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.649425 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a476130be05ce9f6b77a8c4d6e7d5b70c09a080100f8168ccc054b0b900edb8" exitCode=255 Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.649509 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3a476130be05ce9f6b77a8c4d6e7d5b70c09a080100f8168ccc054b0b900edb8"} Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.649555 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.649668 4697 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.650329 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.650353 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.650361 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.650471 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.650515 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.650527 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.651151 4697 scope.go:117] "RemoveContainer" containerID="3a476130be05ce9f6b77a8c4d6e7d5b70c09a080100f8168ccc054b0b900edb8" Jan 27 15:08:39 crc kubenswrapper[4697]: I0127 15:08:39.662713 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.520297 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:28:25.123060486 +0000 UTC Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.653454 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.655521 4697 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd"} Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.655579 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.655671 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.656412 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.656439 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.656448 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.656500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.656517 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:40 crc kubenswrapper[4697]: I0127 15:08:40.656527 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:41 crc kubenswrapper[4697]: I0127 15:08:41.521401 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:06:09.511888301 +0000 UTC Jan 27 15:08:41 crc kubenswrapper[4697]: I0127 15:08:41.598908 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:41 crc kubenswrapper[4697]: I0127 15:08:41.657864 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:41 crc kubenswrapper[4697]: I0127 15:08:41.659408 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:41 crc kubenswrapper[4697]: I0127 15:08:41.659476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:41 crc kubenswrapper[4697]: I0127 15:08:41.659499 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:42 crc kubenswrapper[4697]: I0127 15:08:42.521924 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:41:07.982319071 +0000 UTC Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.355014 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.355207 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.356465 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.356521 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.356539 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.361741 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.522116 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:46:20.766880693 +0000 UTC Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.664093 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.665322 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.665382 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:43 crc kubenswrapper[4697]: I0127 15:08:43.665394 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.364920 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.365062 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.366187 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.366213 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.366222 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.419137 4697 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.419205 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.523414 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:17:16.581089445 +0000 UTC Jan 27 15:08:44 crc kubenswrapper[4697]: E0127 15:08:44.542860 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.544933 4697 trace.go:236] Trace[178662088]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:08:32.189) (total time: 12355ms): Jan 27 15:08:44 crc kubenswrapper[4697]: Trace[178662088]: ---"Objects listed" error: 12355ms (15:08:44.544) Jan 27 15:08:44 crc kubenswrapper[4697]: Trace[178662088]: [12.355255049s] [12.355255049s] END Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.544966 4697 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.545195 4697 trace.go:236] Trace[712889487]: "Reflector 
ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:08:31.187) (total time: 13357ms): Jan 27 15:08:44 crc kubenswrapper[4697]: Trace[712889487]: ---"Objects listed" error: 13357ms (15:08:44.544) Jan 27 15:08:44 crc kubenswrapper[4697]: Trace[712889487]: [13.357495346s] [13.357495346s] END Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.545320 4697 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 15:08:44 crc kubenswrapper[4697]: E0127 15:08:44.546444 4697 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.546494 4697 trace.go:236] Trace[1592983840]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:08:34.201) (total time: 10344ms): Jan 27 15:08:44 crc kubenswrapper[4697]: Trace[1592983840]: ---"Objects listed" error: 10344ms (15:08:44.546) Jan 27 15:08:44 crc kubenswrapper[4697]: Trace[1592983840]: [10.344836712s] [10.344836712s] END Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.546508 4697 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.547730 4697 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.550567 4697 trace.go:236] Trace[740704887]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:08:32.060) (total time: 12490ms): Jan 27 15:08:44 crc kubenswrapper[4697]: Trace[740704887]: ---"Objects listed" error: 12489ms (15:08:44.550) Jan 27 15:08:44 crc kubenswrapper[4697]: Trace[740704887]: [12.490067925s] [12.490067925s] END Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.550594 4697 reflector.go:368] Caches populated for 
*v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.566193 4697 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.584345 4697 csr.go:261] certificate signing request csr-46pc8 is approved, waiting to be issued Jan 27 15:08:44 crc kubenswrapper[4697]: I0127 15:08:44.594060 4697 csr.go:257] certificate signing request csr-46pc8 is issued Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.504078 4697 apiserver.go:52] "Watching apiserver" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.507864 4697 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.508263 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.508720 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.508890 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.508926 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.508997 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.509005 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.509232 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.509280 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.509566 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.509614 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.510246 4697 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.516607 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.516686 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.517246 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.517463 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.517650 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.518313 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.524090 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.524063 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:51:04.613679236 +0000 UTC Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.524284 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.524618 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.553961 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554010 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554063 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554083 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554102 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554121 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554140 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554160 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554181 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554196 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554211 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" 
(UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554225 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554239 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554254 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554270 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554285 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 
15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554299 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554316 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554332 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554348 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554363 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554378 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554394 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554411 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554426 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554441 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554456 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554470 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554486 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554501 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554495 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554519 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554588 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554607 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554628 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554649 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554666 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" 
(UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554682 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554697 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554712 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554728 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554745 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 15:08:45 crc 
kubenswrapper[4697]: I0127 15:08:45.554759 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554775 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554819 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554835 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554850 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554866 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554880 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554899 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554914 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554932 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554950 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 
15:08:45.554944 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.554964 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555021 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555045 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555063 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555080 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555095 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555113 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555129 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555147 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555165 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555182 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555199 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555216 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555230 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555246 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555262 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555279 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555297 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555356 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555376 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555392 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555407 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555424 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555439 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555454 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555476 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555498 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555521 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555542 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555561 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555584 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555600 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555606 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555638 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555656 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555672 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555689 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555704 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555721 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555738 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555758 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555775 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555808 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555826 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555843 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555861 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555876 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555891 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:45 crc 
kubenswrapper[4697]: I0127 15:08:45.555907 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555923 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555938 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555961 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555978 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.555993 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556007 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556022 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556039 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556054 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556085 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556101 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556118 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556134 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556151 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556166 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556181 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556196 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556212 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556228 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556243 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556259 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 
15:08:45.556274 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556292 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556308 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556322 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556383 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556399 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556398 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556416 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556432 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556449 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556465 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 
15:08:45.556491 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556530 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556546 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556564 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556579 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556596 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556612 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556620 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556631 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556652 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556667 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556687 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556703 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556721 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556736 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556752 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556769 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556801 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556817 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556821 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556887 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556905 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556921 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556938 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556954 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556969 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.556986 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557003 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557019 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557042 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557044 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
(UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557060 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557077 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557094 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557086 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557111 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557146 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557186 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557215 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557242 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557248 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557267 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557277 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557292 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557319 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557344 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557366 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557386 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557407 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557430 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557444 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557449 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557474 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557500 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557526 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557550 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557557 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557575 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557604 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557631 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557655 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557679 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557702 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557704 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557723 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557749 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557774 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557818 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" 
(UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557828 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557849 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557875 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557901 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557927 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 15:08:45 crc kubenswrapper[4697]: 
I0127 15:08:45.557940 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.557951 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558007 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558035 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558065 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558094 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558109 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558120 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558149 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558173 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558198 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558202 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558225 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558251 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558280 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558307 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558332 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558358 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558430 4697 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558448 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558464 4697 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558480 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558493 4697 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558509 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558523 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558536 4697 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558549 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558563 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558578 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558592 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558608 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558621 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558634 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558648 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.578762 4697 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.589209 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558280 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558462 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558551 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558660 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558705 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.558895 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.559047 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.559147 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.559507 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.559713 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.559825 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.559949 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.560019 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:08:46.060002071 +0000 UTC m=+22.232401852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.590977 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.591388 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.560333 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.560508 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.560651 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.560770 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.560946 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.560982 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.561388 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.561489 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.561647 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.561659 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.561839 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.561981 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.562057 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.591586 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.591599 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.591669 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.591750 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.591751 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.562297 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.591774 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.562323 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.562438 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.562675 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.562459 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.562745 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.563212 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.563500 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.564117 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.564306 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.564693 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.564715 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.564733 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.564817 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.564947 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.564981 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.565244 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.565248 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.565280 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.565333 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.565417 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.565541 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.565704 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.565804 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.565960 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566181 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566232 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566246 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566256 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566429 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566462 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.592572 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.592629 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.591842 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.562252 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566647 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566669 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566656 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566674 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566909 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.567218 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.567233 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.567271 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.567727 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.567797 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.567813 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.568068 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.568230 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.568420 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.568529 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.568964 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.569553 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.569752 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.569760 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.570055 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.570219 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.570332 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.570458 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.570512 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.570614 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.570717 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.570904 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.571097 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.571296 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.571519 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.571770 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.572109 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.572418 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.572546 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.572768 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.572837 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.572897 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.573160 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.573390 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.573407 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.573663 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.593045 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.573856 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.574158 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.574163 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.574170 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.574347 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.574355 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.574398 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.574511 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.574598 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.575098 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.575403 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.575634 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.575921 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.576114 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.576884 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.577298 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.577508 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.577664 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.577871 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.577987 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.578141 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.581880 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.582029 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.582154 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.582413 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.582492 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.582745 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.583101 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.583163 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.583275 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.583897 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.584170 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.584181 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.584233 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.584261 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.584489 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.584613 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.584726 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.584895 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.584984 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.585144 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.585392 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.585591 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.585634 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.585909 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.586073 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.586830 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.586868 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.588999 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.589094 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.589663 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.589986 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.590210 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.590319 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.590418 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.593664 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 15:08:46.09364683 +0000 UTC m=+22.266046611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.593143 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.590478 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.590587 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.590619 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.593728 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:46.093722103 +0000 UTC m=+22.266121884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.590627 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.566484 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.596253 4697 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 15:03:44 +0000 UTC, rotation deadline is 2026-11-22 03:42:44.336420417 +0000 UTC Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.596286 4697 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7164h33m58.740136674s for next certificate rotation Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.596427 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.596656 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.600517 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.605961 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.607710 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.608843 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.615865 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.617737 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.618108 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.618394 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.621650 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.621848 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.621872 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.621883 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.621930 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:46.121916092 +0000 UTC m=+22.294315873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.630315 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.631168 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.634644 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.638879 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.639009 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.639078 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:45 crc kubenswrapper[4697]: E0127 15:08:45.639170 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:46.139153002 +0000 UTC m=+22.311552783 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.641965 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.642148 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.653940 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.659279 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.659501 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.659632 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.659697 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.659757 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.659852 4697 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.659925 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.659991 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660071 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660141 4697 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660210 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660264 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660339 4697 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660397 4697 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660458 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660520 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660574 4697 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660635 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660693 4697 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660751 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660836 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660899 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.660958 4697 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661015 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661086 4697 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661157 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661221 4697 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661281 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661334 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661394 4697 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661451 4697 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661525 4697 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661673 4697 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.661751 4697 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662069 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662196 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662626 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662648 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662660 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662670 4697 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662679 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662690 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662700 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662709 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662718 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662726 4697 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662734 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662745 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662755 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662764 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662773 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662799 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662808 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662816 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662824 4697 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662832 4697 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662840 4697 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662849 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662857 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662864 4697 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662873 4697 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662883 4697 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662891 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662899 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662907 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662918 4697 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662927 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662936 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662944 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662952 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662960 4697 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662969 4697 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662977 4697 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662985 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.662993 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663001 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663010 4697 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663018 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663028 4697 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663038 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663049 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663058 4697 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663067 4697 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663075 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663083 4697 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663092 4697 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663102 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663111 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663119 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663127 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663136 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663144 4697 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663152 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663161 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663170 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663178 4697 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663187 4697 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663195 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663203 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663212 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663220 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663228 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663237 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663246 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663256 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663265 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663273 4697 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663281 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663289 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663298 4697 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663306 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663314 4697 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663322 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663331 4697 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663340 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663349 4697 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663358 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663365 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663374 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663385 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663394 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663402 4697 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663410 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663418 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663425 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663434 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663441 4697 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663450 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663458 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663466 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663475 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663484 4697 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663492 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663500 4697 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663508 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663516 4697 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName:
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663525 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663534 4697 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663543 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663550 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663558 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663566 4697 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663574 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 
15:08:45.663582 4697 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663590 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663598 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663606 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663613 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663621 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663631 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663639 4697 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663702 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663731 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663743 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663754 4697 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663763 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663773 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663799 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663809 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663817 4697 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663825 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663834 4697 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663842 4697 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663850 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663858 4697 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663867 
4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663876 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663887 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663895 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663904 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663912 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663920 4697 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663929 4697 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663938 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663950 4697 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663962 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663974 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663986 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.663998 4697 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.664006 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.664014 4697 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.664022 4697 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.667207 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.676570 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.689225 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.698885 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.709430 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.719076 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.831524 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.839341 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:08:45 crc kubenswrapper[4697]: W0127 15:08:45.841766 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-446eb5f1c27c004307af0c8024ba4e4c327889ccd0098eccb60184cd2c3d6074 WatchSource:0}: Error finding container 446eb5f1c27c004307af0c8024ba4e4c327889ccd0098eccb60184cd2c3d6074: Status 404 returned error can't find the container with id 446eb5f1c27c004307af0c8024ba4e4c327889ccd0098eccb60184cd2c3d6074 Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.842941 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:08:45 crc kubenswrapper[4697]: W0127 15:08:45.851247 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f96c5d0fe0bd29185a07a9a8ee7ad1b6db28bae02aeb312b3e426306762ba744 WatchSource:0}: Error finding container f96c5d0fe0bd29185a07a9a8ee7ad1b6db28bae02aeb312b3e426306762ba744: Status 404 returned error can't find the container with id f96c5d0fe0bd29185a07a9a8ee7ad1b6db28bae02aeb312b3e426306762ba744 Jan 27 15:08:45 crc kubenswrapper[4697]: W0127 15:08:45.864304 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-44398b41724d16e702865d57bd9bb2ca8f8e6585a9e84b2f150b430fbfa6d6c6 WatchSource:0}: Error finding container 44398b41724d16e702865d57bd9bb2ca8f8e6585a9e84b2f150b430fbfa6d6c6: Status 404 returned error can't find the container with id 44398b41724d16e702865d57bd9bb2ca8f8e6585a9e84b2f150b430fbfa6d6c6 Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.948588 4697 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-dns/node-resolver-bdclj"] Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.948930 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bdclj" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.951642 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.951820 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.951934 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.964568 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.985542 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:45 crc kubenswrapper[4697]: I0127 15:08:45.997406 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.007932 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.041933 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.067234 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.067327 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:08:47.067312093 +0000 UTC m=+23.239711874 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.067425 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f898q\" (UniqueName: \"kubernetes.io/projected/ed86f7b6-a042-470f-8da3-9cad4e65c550-kube-api-access-f898q\") pod \"node-resolver-bdclj\" (UID: \"ed86f7b6-a042-470f-8da3-9cad4e65c550\") " pod="openshift-dns/node-resolver-bdclj" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.067509 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ed86f7b6-a042-470f-8da3-9cad4e65c550-hosts-file\") pod \"node-resolver-bdclj\" (UID: \"ed86f7b6-a042-470f-8da3-9cad4e65c550\") " pod="openshift-dns/node-resolver-bdclj" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.069652 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.083044 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.168082 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.168135 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f898q\" (UniqueName: \"kubernetes.io/projected/ed86f7b6-a042-470f-8da3-9cad4e65c550-kube-api-access-f898q\") pod \"node-resolver-bdclj\" (UID: \"ed86f7b6-a042-470f-8da3-9cad4e65c550\") " pod="openshift-dns/node-resolver-bdclj" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.168158 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.168215 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.168240 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ed86f7b6-a042-470f-8da3-9cad4e65c550-hosts-file\") pod \"node-resolver-bdclj\" (UID: \"ed86f7b6-a042-470f-8da3-9cad4e65c550\") " pod="openshift-dns/node-resolver-bdclj" Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168264 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.168409 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/ed86f7b6-a042-470f-8da3-9cad4e65c550-hosts-file\") pod \"node-resolver-bdclj\" (UID: \"ed86f7b6-a042-470f-8da3-9cad4e65c550\") " pod="openshift-dns/node-resolver-bdclj" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.168268 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168444 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:47.168426405 +0000 UTC m=+23.340826186 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168362 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168493 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168507 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168559 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:47.168527528 +0000 UTC m=+23.340927309 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168367 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168585 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168594 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168643 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:47.1686139 +0000 UTC m=+23.341013681 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168370 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.168679 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:47.168672061 +0000 UTC m=+23.341071842 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.192561 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f898q\" (UniqueName: \"kubernetes.io/projected/ed86f7b6-a042-470f-8da3-9cad4e65c550-kube-api-access-f898q\") pod \"node-resolver-bdclj\" (UID: \"ed86f7b6-a042-470f-8da3-9cad4e65c550\") " pod="openshift-dns/node-resolver-bdclj" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.273329 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bdclj" Jan 27 15:08:46 crc kubenswrapper[4697]: W0127 15:08:46.283985 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded86f7b6_a042_470f_8da3_9cad4e65c550.slice/crio-6adbed90a9d0c3ba26d0d93f2788a53bc12ed93f2ba3ee7783f73f5aed7246e3 WatchSource:0}: Error finding container 6adbed90a9d0c3ba26d0d93f2788a53bc12ed93f2ba3ee7783f73f5aed7246e3: Status 404 returned error can't find the container with id 6adbed90a9d0c3ba26d0d93f2788a53bc12ed93f2ba3ee7783f73f5aed7246e3 Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.320265 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wz495"] Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.320584 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.322360 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.322933 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.324583 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.324610 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.324825 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-bcb9s"] Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.325481 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rq89t"] Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.325700 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.325733 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.326394 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.328901 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.328942 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.329244 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.329265 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.329332 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.329575 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.329717 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.336878 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.349661 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.361703 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.374138 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.386593 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.398152 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.409115 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.419396 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.428568 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.438759 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.446812 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.454469 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.462262 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471554 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-cnibin\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 
crc kubenswrapper[4697]: I0127 15:08:46.471590 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-run-k8s-cni-cncf-io\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471609 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-daemon-config\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471625 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-system-cni-dir\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471641 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-rootfs\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471655 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-socket-dir-parent\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " 
pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471670 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-run-multus-certs\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471685 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-var-lib-kubelet\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471700 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-proxy-tls\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471715 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-var-lib-cni-bin\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471738 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-conf-dir\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " 
pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471759 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-cnibin\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471772 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-hostroot\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471820 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-system-cni-dir\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471834 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-os-release\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471851 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npp7h\" (UniqueName: \"kubernetes.io/projected/7fbc1c27-fba2-40df-95dd-3842bd1f1906-kube-api-access-npp7h\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471864 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-os-release\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471879 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl56b\" (UniqueName: \"kubernetes.io/projected/b7543bea-0b65-44e1-8c0c-bc1a13577d69-kube-api-access-cl56b\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471901 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-mcd-auth-proxy-config\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471915 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wqhs\" (UniqueName: \"kubernetes.io/projected/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-kube-api-access-2wqhs\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471928 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471942 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-cni-dir\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471955 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7fbc1c27-fba2-40df-95dd-3842bd1f1906-cni-binary-copy\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471970 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b7543bea-0b65-44e1-8c0c-bc1a13577d69-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.471989 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-run-netns\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.472003 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-etc-kubernetes\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.472017 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7543bea-0b65-44e1-8c0c-bc1a13577d69-cni-binary-copy\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.472031 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-var-lib-cni-multus\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.474514 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.486043 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.497807 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.513210 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.523689 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.524686 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:21:11.040029073 +0000 UTC Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.567987 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.568174 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.572438 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573051 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573154 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-system-cni-dir\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573190 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-os-release\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573210 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npp7h\" (UniqueName: \"kubernetes.io/projected/7fbc1c27-fba2-40df-95dd-3842bd1f1906-kube-api-access-npp7h\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573234 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-os-release\") pod 
\"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573252 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl56b\" (UniqueName: \"kubernetes.io/projected/b7543bea-0b65-44e1-8c0c-bc1a13577d69-kube-api-access-cl56b\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573275 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-mcd-auth-proxy-config\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573293 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573309 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-cni-dir\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573327 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wqhs\" (UniqueName: 
\"kubernetes.io/projected/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-kube-api-access-2wqhs\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573323 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-system-cni-dir\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573342 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7fbc1c27-fba2-40df-95dd-3842bd1f1906-cni-binary-copy\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573464 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b7543bea-0b65-44e1-8c0c-bc1a13577d69-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573511 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-os-release\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573518 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-run-netns\") pod 
\"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573563 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-run-netns\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573596 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-etc-kubernetes\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573618 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7543bea-0b65-44e1-8c0c-bc1a13577d69-cni-binary-copy\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573637 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-var-lib-cni-multus\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573655 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-cnibin\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " 
pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573670 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-run-k8s-cni-cncf-io\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573700 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-daemon-config\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573721 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-cni-dir\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573752 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-system-cni-dir\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573728 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-system-cni-dir\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 
15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573522 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-os-release\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573801 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-etc-kubernetes\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573816 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-rootfs\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573845 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-socket-dir-parent\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573871 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-run-multus-certs\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573892 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-var-lib-kubelet\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573914 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-proxy-tls\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573933 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-var-lib-cni-bin\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573954 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-conf-dir\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573977 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-cnibin\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573980 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7fbc1c27-fba2-40df-95dd-3842bd1f1906-cni-binary-copy\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.573994 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-hostroot\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574199 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-var-lib-cni-bin\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574243 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-conf-dir\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574281 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-cnibin\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574314 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-hostroot\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 
15:08:46.574345 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-cnibin\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574375 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-var-lib-cni-multus\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574369 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7543bea-0b65-44e1-8c0c-bc1a13577d69-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574408 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-run-k8s-cni-cncf-io\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574443 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7543bea-0b65-44e1-8c0c-bc1a13577d69-cni-binary-copy\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574465 4697 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-socket-dir-parent\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574481 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-run-multus-certs\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574510 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fbc1c27-fba2-40df-95dd-3842bd1f1906-host-var-lib-kubelet\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574504 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-rootfs\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574602 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-mcd-auth-proxy-config\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.574890 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/b7543bea-0b65-44e1-8c0c-bc1a13577d69-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.575021 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7fbc1c27-fba2-40df-95dd-3842bd1f1906-multus-daemon-config\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.575450 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.576746 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.577485 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.578307 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-proxy-tls\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.578588 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" 
Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.579342 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.580654 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.581443 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.582609 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.583341 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.584646 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.585295 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.585940 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" 
Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.587079 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.587705 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.589005 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.589722 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.590448 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.591255 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.592938 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.593625 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" 
Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.594166 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.594713 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npp7h\" (UniqueName: \"kubernetes.io/projected/7fbc1c27-fba2-40df-95dd-3842bd1f1906-kube-api-access-npp7h\") pod \"multus-rq89t\" (UID: \"7fbc1c27-fba2-40df-95dd-3842bd1f1906\") " pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.595285 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.595695 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.596912 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.597119 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl56b\" (UniqueName: \"kubernetes.io/projected/b7543bea-0b65-44e1-8c0c-bc1a13577d69-kube-api-access-cl56b\") pod \"multus-additional-cni-plugins-bcb9s\" (UID: \"b7543bea-0b65-44e1-8c0c-bc1a13577d69\") " pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.597683 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.598852 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.600104 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.600573 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.601743 4697 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.601912 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.602283 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wqhs\" (UniqueName: \"kubernetes.io/projected/e9bec8bc-b2a6-4865-83ca-692ae5c022a6-kube-api-access-2wqhs\") pod \"machine-config-daemon-wz495\" (UID: \"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\") " pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.604083 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" 
path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.605168 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.605705 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.607963 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.610087 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.610721 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.611558 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.612445 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.613052 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.613837 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.614496 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.615230 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.617564 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.618373 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.619043 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.620312 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.620930 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.621505 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.622219 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.622830 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.625521 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.626124 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.638299 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.649487 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.658083 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rq89t" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.673562 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.673611 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.673624 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f96c5d0fe0bd29185a07a9a8ee7ad1b6db28bae02aeb312b3e426306762ba744"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.674953 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.679593 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.685048 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.685810 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.685886 4697 scope.go:117] "RemoveContainer" containerID="3a476130be05ce9f6b77a8c4d6e7d5b70c09a080100f8168ccc054b0b900edb8" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.685776 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd" exitCode=255 Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.689794 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bdclj" event={"ID":"ed86f7b6-a042-470f-8da3-9cad4e65c550","Type":"ContainerStarted","Data":"a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.689833 4697 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns/node-resolver-bdclj" event={"ID":"ed86f7b6-a042-470f-8da3-9cad4e65c550","Type":"ContainerStarted","Data":"6adbed90a9d0c3ba26d0d93f2788a53bc12ed93f2ba3ee7783f73f5aed7246e3"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.697770 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.697839 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"446eb5f1c27c004307af0c8024ba4e4c327889ccd0098eccb60184cd2c3d6074"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.699493 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6jxw"] Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.700464 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.701351 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.702689 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.702955 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.703149 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.703249 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.703364 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.703563 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 
15:08:46.706566 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerStarted","Data":"b1411aa4472918b825c560a9470108b1b68d45ffce946a973af01b9fb9f4a54f"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.707110 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.708682 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"0317d785f92b4be839580b29eeb2c4ad70bdf6337d50698829196bb8b68f6e9e"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.711088 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"44398b41724d16e702865d57bd9bb2ca8f8e6585a9e84b2f150b430fbfa6d6c6"} Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.717053 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.724489 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.734764 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.748491 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.763606 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775137 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-log-socket\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775171 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775186 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-node-log\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775200 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775217 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-systemd-units\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775230 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-openvswitch\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775244 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jp8x\" (UniqueName: \"kubernetes.io/projected/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-kube-api-access-5jp8x\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775225 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775297 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-netd\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775318 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-ovn\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775335 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-systemd\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775348 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-var-lib-openvswitch\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775363 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-etc-openvswitch\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775377 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-netns\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775403 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-script-lib\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775421 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-kubelet\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775435 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-bin\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775450 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-config\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775464 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovn-node-metrics-cert\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775493 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-env-overrides\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.775518 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-slash\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.786081 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.796984 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.814962 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.824152 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.831695 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.835270 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.835698 4697 scope.go:117] "RemoveContainer" containerID="3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd" Jan 27 15:08:46 crc kubenswrapper[4697]: E0127 15:08:46.835931 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.840454 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.849398 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-
config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.860434 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.869997 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876041 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-log-socket\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876087 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876107 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876178 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-node-log\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876195 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-systemd-units\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876209 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-openvswitch\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876231 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jp8x\" (UniqueName: \"kubernetes.io/projected/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-kube-api-access-5jp8x\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876264 4697 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-netd\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876281 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-ovn\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876281 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876302 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-systemd\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876335 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-systemd\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876356 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-var-lib-openvswitch\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876365 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-log-socket\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876380 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-node-log\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876402 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-systemd-units\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876401 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876425 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-netd\") pod 
\"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876430 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-ovn\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876334 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-var-lib-openvswitch\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876456 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-openvswitch\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876507 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-etc-openvswitch\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876530 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-netns\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876547 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-script-lib\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876562 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-kubelet\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876576 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-bin\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876590 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-config\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876605 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovn-node-metrics-cert\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 
crc kubenswrapper[4697]: I0127 15:08:46.878197 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-env-overrides\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.878212 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-slash\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876758 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-bin\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876795 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-kubelet\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.877398 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-script-lib\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.877413 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-config\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876684 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-etc-openvswitch\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.878557 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-env-overrides\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.876734 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-netns\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.878605 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-slash\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.880682 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.880838 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovn-node-metrics-cert\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.891391 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jp8x\" (UniqueName: \"kubernetes.io/projected/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-kube-api-access-5jp8x\") pod \"ovnkube-node-z6jxw\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.896469 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.909770 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:46 crc kubenswrapper[4697]: I0127 15:08:46.919177 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.019711 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:47 crc kubenswrapper[4697]: W0127 15:08:47.033602 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a1ce5ad_1a8c_4a28_99d8_fc71649954ad.slice/crio-24ccad5ec43b98acb432bf323d3f81e8e30f928ca13d69c59aa9557597dfee96 WatchSource:0}: Error finding container 24ccad5ec43b98acb432bf323d3f81e8e30f928ca13d69c59aa9557597dfee96: Status 404 returned error can't find the container with id 24ccad5ec43b98acb432bf323d3f81e8e30f928ca13d69c59aa9557597dfee96 Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.080139 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.080285 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:08:49.080262167 +0000 UTC m=+25.252661948 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.181650 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.181912 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.182024 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.182129 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.181843 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.182389 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:49.182370733 +0000 UTC m=+25.354770524 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.182032 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.182551 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.182063 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 
15:08:47.182287 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.182758 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.182854 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.182620 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.182739 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:49.18270044 +0000 UTC m=+25.355100221 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.183074 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:49.183065 +0000 UTC m=+25.355464781 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.183142 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:49.183134981 +0000 UTC m=+25.355534762 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.525934 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:21:52.399474103 +0000 UTC Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.567364 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.567521 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.567400 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.567594 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.715525 4697 generic.go:334] "Generic (PLEG): container finished" podID="b7543bea-0b65-44e1-8c0c-bc1a13577d69" containerID="e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc" exitCode=0 Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.715581 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerDied","Data":"e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc"} Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.717515 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0"} Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.717557 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2"} Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.719225 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785" exitCode=0 Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.719313 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785"} Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 
15:08:47.719515 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"24ccad5ec43b98acb432bf323d3f81e8e30f928ca13d69c59aa9557597dfee96"} Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.722293 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq89t" event={"ID":"7fbc1c27-fba2-40df-95dd-3842bd1f1906","Type":"ContainerStarted","Data":"c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0"} Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.722336 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq89t" event={"ID":"7fbc1c27-fba2-40df-95dd-3842bd1f1906","Type":"ContainerStarted","Data":"1a0bb59ae6e39a9ec68d8b2232e0352a7a0363425454bb7e7209a64fc89fbd6c"} Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.725302 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.730335 4697 scope.go:117] "RemoveContainer" containerID="3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd" Jan 27 15:08:47 crc kubenswrapper[4697]: E0127 15:08:47.730512 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.735917 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.751009 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.776052 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.793039 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.813190 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.833887 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.847516 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.859273 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.871134 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.888722 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.900229 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.921880 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a476130be05ce9f6b77a8c4d6e7d5b70c09a080100f8168ccc054b0b900edb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:38Z\\\",\\\"message\\\":\\\"W0127 15:08:27.778530 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:08:27.779741 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769526507 cert, and key in /tmp/serving-cert-402677612/serving-signer.crt, /tmp/serving-cert-402677612/serving-signer.key\\\\nI0127 15:08:28.368424 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:08:28.370617 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:08:28.370757 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:28.372142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-402677612/tls.crt::/tmp/serving-cert-402677612/tls.key\\\\\\\"\\\\nF0127 15:08:38.819056 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.943116 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.958172 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.978076 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:47 crc kubenswrapper[4697]: I0127 15:08:47.991527 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:47Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.004324 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.015751 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.026684 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.039911 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.051089 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.066679 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.080453 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.091124 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.528086 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:08:22.369057597 +0000 UTC Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.567685 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:48 crc kubenswrapper[4697]: E0127 15:08:48.567823 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.734761 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerStarted","Data":"3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9"} Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.736277 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c"} Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.739871 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.739900 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.739912 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.739922 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" 
event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.739931 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.739940 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.756531 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.773352 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.794939 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.819702 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.835577 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.850088 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.866522 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.881634 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.892535 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.909822 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.930561 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.941631 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.965462 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.981306 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:48 crc kubenswrapper[4697]: I0127 15:08:48.996858 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.011901 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.020920 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.032618 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be742
1a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.042778 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.052898 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.065146 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.076945 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.094363 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.099285 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.099515 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:08:53.099485566 +0000 UTC m=+29.271885387 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.106555 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.200205 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.200257 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.200287 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.200321 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200395 4697 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200418 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200430 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200429 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200480 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:53.200462386 +0000 UTC m=+29.372862167 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200442 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200502 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:53.200492607 +0000 UTC m=+29.372892388 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200538 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:53.200515878 +0000 UTC m=+29.372915659 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200611 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200624 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200632 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.200667 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:08:53.200659751 +0000 UTC m=+29.373059532 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.528861 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:31:23.695653781 +0000 UTC Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.567418 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.567552 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.567814 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:08:49 crc kubenswrapper[4697]: E0127 15:08:49.567551 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.744173 4697 generic.go:334] "Generic (PLEG): container finished" podID="b7543bea-0b65-44e1-8c0c-bc1a13577d69" containerID="3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9" exitCode=0 Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.744223 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerDied","Data":"3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9"} Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.760902 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.783558 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.800573 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.811950 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.825417 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.842452 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.856192 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"
/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.870436 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.888473 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.905451 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.918431 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:49 crc kubenswrapper[4697]: I0127 15:08:49.929362 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.440361 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lpz4j"] Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.440773 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.442695 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.442751 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.443082 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.444894 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.453330 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.465184 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.474355 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.486701 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.498188 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.511158 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.511766 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jqc\" (UniqueName: \"kubernetes.io/projected/d187caad-2501-44d6-8ced-f8d8ca5fecfb-kube-api-access-d5jqc\") pod \"node-ca-lpz4j\" (UID: \"d187caad-2501-44d6-8ced-f8d8ca5fecfb\") " pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.511834 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/d187caad-2501-44d6-8ced-f8d8ca5fecfb-host\") pod \"node-ca-lpz4j\" (UID: \"d187caad-2501-44d6-8ced-f8d8ca5fecfb\") " pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.511886 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d187caad-2501-44d6-8ced-f8d8ca5fecfb-serviceca\") pod \"node-ca-lpz4j\" (UID: \"d187caad-2501-44d6-8ced-f8d8ca5fecfb\") " pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.527335 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.529398 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:18:14.293937173 +0000 UTC Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.546164 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.559067 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.568294 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:50 crc kubenswrapper[4697]: E0127 15:08:50.568443 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.570511 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.584270 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.597084 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.608097 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.613085 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d187caad-2501-44d6-8ced-f8d8ca5fecfb-serviceca\") pod \"node-ca-lpz4j\" (UID: \"d187caad-2501-44d6-8ced-f8d8ca5fecfb\") " pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.613277 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5jqc\" (UniqueName: 
\"kubernetes.io/projected/d187caad-2501-44d6-8ced-f8d8ca5fecfb-kube-api-access-d5jqc\") pod \"node-ca-lpz4j\" (UID: \"d187caad-2501-44d6-8ced-f8d8ca5fecfb\") " pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.613561 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d187caad-2501-44d6-8ced-f8d8ca5fecfb-host\") pod \"node-ca-lpz4j\" (UID: \"d187caad-2501-44d6-8ced-f8d8ca5fecfb\") " pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.613623 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d187caad-2501-44d6-8ced-f8d8ca5fecfb-host\") pod \"node-ca-lpz4j\" (UID: \"d187caad-2501-44d6-8ced-f8d8ca5fecfb\") " pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.614363 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d187caad-2501-44d6-8ced-f8d8ca5fecfb-serviceca\") pod \"node-ca-lpz4j\" (UID: \"d187caad-2501-44d6-8ced-f8d8ca5fecfb\") " pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.633696 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5jqc\" (UniqueName: \"kubernetes.io/projected/d187caad-2501-44d6-8ced-f8d8ca5fecfb-kube-api-access-d5jqc\") pod \"node-ca-lpz4j\" (UID: \"d187caad-2501-44d6-8ced-f8d8ca5fecfb\") " pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.750484 4697 generic.go:334] "Generic (PLEG): container finished" podID="b7543bea-0b65-44e1-8c0c-bc1a13577d69" containerID="2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1" exitCode=0 Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.750871 4697 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerDied","Data":"2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1"} Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.753319 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lpz4j" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.760514 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.770745 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf
73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}
\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.789128 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.802322 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.818312 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.831848 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.843570 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.853753 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.867965 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be742
1a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.880096 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.892122 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.908909 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 
15:08:50.926129 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 
15:08:50.948159 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.948513 4697 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.953592 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.954021 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.954049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.954124 4697 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.961179 4697 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.961446 4697 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.972743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:50 crc 
kubenswrapper[4697]: I0127 15:08:50.972798 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.972811 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.972828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.972840 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:50Z","lastTransitionTime":"2026-01-27T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:50 crc kubenswrapper[4697]: E0127 15:08:50.991043 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.994590 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.994614 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.994625 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.994640 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:50 crc kubenswrapper[4697]: I0127 15:08:50.994652 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:50Z","lastTransitionTime":"2026-01-27T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.023270 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.023299 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.023308 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.023321 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.023330 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.044692 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.044729 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.044738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.044754 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.044764 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: E0127 15:08:51.062219 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.065766 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.065828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.065844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.065859 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.065869 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: E0127 15:08:51.077810 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: E0127 15:08:51.077927 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.079398 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.079493 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.079676 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.079841 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.079980 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.182868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.183082 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.183166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.183226 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.183279 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.285729 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.285997 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.286074 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.286180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.286253 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.387945 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.387977 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.387985 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.387998 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.388010 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.422191 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.427113 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.434533 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.434966 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.446942 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.463658 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.479884 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 
15:08:51.489956 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.490162 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.490313 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.490381 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.490434 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.495027 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.529923 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:01:50.260056181 +0000 UTC Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.530134 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.540935 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.551993 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.565628 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.567632 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:51 crc kubenswrapper[4697]: E0127 15:08:51.567851 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.567632 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:51 crc kubenswrapper[4697]: E0127 15:08:51.568142 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.581084 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.592766 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.593007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.593076 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc 
kubenswrapper[4697]: I0127 15:08:51.593141 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.593196 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.600516 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.612279 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.626167 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.642892 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.659019 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 
15:08:51.671774 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 
15:08:51.689896 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.695493 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.695535 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.695548 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.695564 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.695577 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.703919 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d
493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.717992 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.731557 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.743274 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.752970 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.765125 4697 generic.go:334] "Generic (PLEG): container finished" podID="b7543bea-0b65-44e1-8c0c-bc1a13577d69" containerID="a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c" exitCode=0 Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.765222 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" 
event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerDied","Data":"a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.766767 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lpz4j" event={"ID":"d187caad-2501-44d6-8ced-f8d8ca5fecfb","Type":"ContainerStarted","Data":"7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.766818 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lpz4j" event={"ID":"d187caad-2501-44d6-8ced-f8d8ca5fecfb","Type":"ContainerStarted","Data":"131f2b9710f0a489722b92216b14dbf056435c463ad9d11faaa5d401c02fc16e"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.770663 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.783513 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.795257 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.798049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.798214 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.798298 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.798383 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.798477 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.807656 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.821364 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.831825 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.843050 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.856253 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.866807 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.882122 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.894963 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.901374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.901414 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.901429 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.901445 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.901457 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:51Z","lastTransitionTime":"2026-01-27T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.909173 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.920949 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.938339 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.949968 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.963143 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.972472 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.981964 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:51 crc kubenswrapper[4697]: I0127 15:08:51.990579 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:51Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.003103 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.003151 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.003161 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc 
kubenswrapper[4697]: I0127 15:08:52.003177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.003195 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.105385 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.105415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.105423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.105435 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.105444 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.207846 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.207921 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.207945 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.207978 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.208002 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.310637 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.310692 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.310708 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.310733 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.310750 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.413091 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.413168 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.413192 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.413225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.413250 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.517709 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.517759 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.517771 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.517811 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.517828 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.530510 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:17:25.319473098 +0000 UTC Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.568487 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:52 crc kubenswrapper[4697]: E0127 15:08:52.568645 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.620719 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.620770 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.620821 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.620849 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.620866 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.725123 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.725193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.725214 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.725240 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.725258 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.809102 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerStarted","Data":"dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.822483 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.828281 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.828513 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.828587 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.828661 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.828771 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.847369 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.860564 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.877090 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.891620 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.913443 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.933271 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.933354 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.933366 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.933384 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.933396 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:52Z","lastTransitionTime":"2026-01-27T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.942431 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5
6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.959425 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426
f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.972424 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.986407 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:52 crc kubenswrapper[4697]: I0127 15:08:52.996679 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.011955 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.028676 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.036079 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.036151 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.036168 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.036192 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.036210 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.043300 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.138474 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.138517 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.138528 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.138545 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.138556 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.140126 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.140396 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:09:01.140362693 +0000 UTC m=+37.312762504 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.240459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.240490 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.240498 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.240512 4697 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.240521 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.241050 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.241203 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.241223 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.241237 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.241293 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:01.24127727 +0000 UTC m=+37.413677051 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.241622 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.241718 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.241735 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.241743 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:53 crc 
kubenswrapper[4697]: E0127 15:08:53.241773 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:01.241765022 +0000 UTC m=+37.414164803 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.241813 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.241834 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.241882 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.241927 4697 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:01.241897195 +0000 UTC m=+37.414296976 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.242026 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.242104 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:01.24208678 +0000 UTC m=+37.414486561 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.342537 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.342576 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.342584 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.342601 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.342610 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.445405 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.445458 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.445472 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.445493 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.445510 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.530922 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:40:12.481659068 +0000 UTC Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.548248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.548297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.548312 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.548329 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.548341 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.567634 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.567663 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.567802 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:08:53 crc kubenswrapper[4697]: E0127 15:08:53.567886 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.650966 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.651007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.651016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.651033 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.651044 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.753649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.753714 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.753732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.753758 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.753779 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.817121 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.817324 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.817348 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.820195 4697 generic.go:334] "Generic (PLEG): container finished" podID="b7543bea-0b65-44e1-8c0c-bc1a13577d69" containerID="dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e" exitCode=0 Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.820250 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerDied","Data":"dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.836330 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.854919 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.856511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.856545 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.856554 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.856567 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.856577 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.857536 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.861273 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.883220 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.900764 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.915491 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.945814 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be742
1a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.959650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.959732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.959746 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.959763 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.959853 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:53Z","lastTransitionTime":"2026-01-27T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.965718 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.980821 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:53 crc kubenswrapper[4697]: I0127 15:08:53.996672 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:53Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.008054 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.026303 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.038306 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.051380 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.061194 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.061749 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.061793 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.061804 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.061819 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.061827 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:54Z","lastTransitionTime":"2026-01-27T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.073668 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d
493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.085322 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.096348 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.107374 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.123756 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.140612 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.158942 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.163434 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.163458 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.163466 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.163480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.163490 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:54Z","lastTransitionTime":"2026-01-27T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.170684 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.179972 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.193662 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.205511 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.218197 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.229976 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.243853 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.265498 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.265542 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.265551 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.265565 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.265576 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:54Z","lastTransitionTime":"2026-01-27T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.363371 4697 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.387602 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.387658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.387674 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.387775 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.387862 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:54Z","lastTransitionTime":"2026-01-27T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.489757 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.489818 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.489830 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.489844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.489855 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:54Z","lastTransitionTime":"2026-01-27T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.531117 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:17:16.96473645 +0000 UTC Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.568043 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:54 crc kubenswrapper[4697]: E0127 15:08:54.568220 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.581708 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.592583 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.592633 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.592649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.592672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.592690 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:54Z","lastTransitionTime":"2026-01-27T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.593591 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.603152 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.615593 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9
c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.627003 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.645309 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.660761 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056
e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.681455 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.694241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.694274 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.694284 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.694300 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.694312 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:54Z","lastTransitionTime":"2026-01-27T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.698949 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d
493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.712101 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.724406 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.737464 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.750219 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.760010 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.796884 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.796931 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.796946 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.796965 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.796980 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:54Z","lastTransitionTime":"2026-01-27T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.825233 4697 generic.go:334] "Generic (PLEG): container finished" podID="b7543bea-0b65-44e1-8c0c-bc1a13577d69" containerID="7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4" exitCode=0 Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.825557 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.825420 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerDied","Data":"7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.839252 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc7
2c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.855416 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.871649 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.890320 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.900449 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.900476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.900501 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.900515 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.900524 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:54Z","lastTransitionTime":"2026-01-27T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.900585 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.918593 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.933147 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.943805 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.957766 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.971011 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:54 crc kubenswrapper[4697]: I0127 15:08:54.994151 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.002950 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.002988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.003001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.003016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.003028 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.008803 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d
493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.021395 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.032095 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.094305 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:08:55 crc 
kubenswrapper[4697]: I0127 15:08:55.094910 4697 scope.go:117] "RemoveContainer" containerID="3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd" Jan 27 15:08:55 crc kubenswrapper[4697]: E0127 15:08:55.095050 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.104977 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.105026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.105042 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.105063 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.105080 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.208531 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.208597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.208608 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.208626 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.208662 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.311322 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.311371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.311381 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.311395 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.311404 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.414368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.414445 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.414462 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.414509 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.414525 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.517229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.517258 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.517267 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.517280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.517290 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.531733 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:46:40.676573278 +0000 UTC Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.567635 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.567674 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:55 crc kubenswrapper[4697]: E0127 15:08:55.567884 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:08:55 crc kubenswrapper[4697]: E0127 15:08:55.568089 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.619118 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.619176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.619195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.619218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.619232 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.721547 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.721603 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.721646 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.721671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.721690 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.824147 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.824239 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.824259 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.824286 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.824306 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.833848 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" event={"ID":"b7543bea-0b65-44e1-8c0c-bc1a13577d69","Type":"ContainerStarted","Data":"4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.833898 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.860179 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac856
99cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4
d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.882605 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.914021 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.926800 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.926846 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.926859 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.926878 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.926891 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:55Z","lastTransitionTime":"2026-01-27T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.937588 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d
493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.956652 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.980431 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:55 crc kubenswrapper[4697]: I0127 15:08:55.992429 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.002847 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.013419 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.025197 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.028292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.028323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.028362 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.028382 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.028393 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.036044 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.047492 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.060979 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9
c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.073316 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.130348 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.130388 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.130401 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc 
kubenswrapper[4697]: I0127 15:08:56.130418 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.130433 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.232728 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.232837 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.232864 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.232895 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.232920 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.335389 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.335864 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.335951 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.336061 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.336139 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.438409 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.438438 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.438446 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.438459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.438467 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.532286 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:17:03.045675921 +0000 UTC Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.541063 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.541113 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.541124 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.541142 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.541155 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.568296 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:56 crc kubenswrapper[4697]: E0127 15:08:56.568443 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.642915 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.642946 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.642954 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.642967 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.642975 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.745866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.745902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.745914 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.745930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.745942 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.838575 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/0.log" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.841382 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6" exitCode=1 Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.841456 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.842415 4697 scope.go:117] "RemoveContainer" containerID="de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.848129 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.848171 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.848183 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.848201 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.848213 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.860003 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.872667 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.888801 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.900340 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.916658 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.943962 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:56Z\\\",\\\"message\\\":\\\"nt handler 6 for removal\\\\nI0127 15:08:56.557373 5853 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:08:56.557383 5853 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:08:56.557422 5853 
handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 15:08:56.557448 5853 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 15:08:56.557616 5853 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:08:56.557656 5853 factory.go:656] Stopping watch factory\\\\nI0127 15:08:56.557677 5853 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:08:56.557718 5853 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.557823 5853 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558030 5853 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558270 5853 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558371 5853 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558451 5853 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011
df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.950230 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.950368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.950443 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.950518 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.950606 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:56Z","lastTransitionTime":"2026-01-27T15:08:56Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.958386 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0
a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.972514 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:56 crc kubenswrapper[4697]: I0127 15:08:56.986196 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.001431 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.014835 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.026319 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.038891 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: 
I0127 15:08:57.051444 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.052983 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.053012 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.053020 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.053035 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.053044 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.156318 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.156427 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.156441 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.156462 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.156475 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.259592 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.259669 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.259702 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.259738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.259763 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.361964 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.362003 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.362012 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.362027 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.362036 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.465489 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.465525 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.465532 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.465548 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.465557 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.532727 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:03:40.331013984 +0000 UTC Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.567377 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:57 crc kubenswrapper[4697]: E0127 15:08:57.567474 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.567512 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:57 crc kubenswrapper[4697]: E0127 15:08:57.567562 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.568494 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.568515 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.568524 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.568534 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.568543 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.671423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.671474 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.671485 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.671506 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.671523 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.774698 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.774732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.774742 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.774758 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.774769 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.847247 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/0.log" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.849887 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.849991 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.865188 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4c
f86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.877208 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.877247 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.877257 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.877272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.877283 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.885169 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.902592 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.917236 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.928944 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.947039 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:56Z\\\",\\\"message\\\":\\\"nt handler 6 for removal\\\\nI0127 15:08:56.557373 5853 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:08:56.557383 5853 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:08:56.557422 5853 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 15:08:56.557448 5853 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0127 15:08:56.557616 5853 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:08:56.557656 5853 factory.go:656] Stopping watch factory\\\\nI0127 15:08:56.557677 5853 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:08:56.557718 5853 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.557823 5853 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558030 5853 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558270 5853 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558371 5853 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558451 5853 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.960640 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.974631 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.979675 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.979718 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.979728 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:57 crc 
kubenswrapper[4697]: I0127 15:08:57.979746 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.979756 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:57Z","lastTransitionTime":"2026-01-27T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:57 crc kubenswrapper[4697]: I0127 15:08:57.988018 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.007748 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.020550 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.034074 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.047921 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.060751 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.082936 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.082992 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.083004 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.083022 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.083035 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:58Z","lastTransitionTime":"2026-01-27T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.185970 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.186013 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.186023 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.186041 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.186051 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:58Z","lastTransitionTime":"2026-01-27T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.289666 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.289707 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.289715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.289731 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.289742 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:58Z","lastTransitionTime":"2026-01-27T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.392843 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.392941 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.392961 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.392987 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.393005 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:58Z","lastTransitionTime":"2026-01-27T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.495561 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.495606 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.495617 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.495632 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.495643 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:58Z","lastTransitionTime":"2026-01-27T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.533085 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:54:52.609833071 +0000 UTC Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.570604 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:08:58 crc kubenswrapper[4697]: E0127 15:08:58.570734 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.598252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.598295 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.598316 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.598333 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.598344 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:58Z","lastTransitionTime":"2026-01-27T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.701434 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.701492 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.701509 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.701532 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.701551 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:58Z","lastTransitionTime":"2026-01-27T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.805005 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.805065 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.805077 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.805098 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.805111 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:58Z","lastTransitionTime":"2026-01-27T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.823044 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86"] Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.823594 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.826993 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.827629 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.846838 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66
438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.855477 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/1.log" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.856427 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/0.log" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.861113 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2" exitCode=1 Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.861165 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.861205 4697 scope.go:117] "RemoveContainer" containerID="de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.862075 4697 scope.go:117] "RemoveContainer" containerID="c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2" Jan 27 15:08:58 crc kubenswrapper[4697]: E0127 15:08:58.862312 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" 
podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.870267 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.885001 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39a
ed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.898625 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.901961 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/35bbb68b-046f-482d-8c38-e76dd8a12a61-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.902245 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/35bbb68b-046f-482d-8c38-e76dd8a12a61-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.902629 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/35bbb68b-046f-482d-8c38-e76dd8a12a61-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.903019 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sf5z9\" (UniqueName: \"kubernetes.io/projected/35bbb68b-046f-482d-8c38-e76dd8a12a61-kube-api-access-sf5z9\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.907719 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.907759 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.907771 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.907808 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.907824 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:58Z","lastTransitionTime":"2026-01-27T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.914158 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.932073 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.946850 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.961138 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:58 crc kubenswrapper[4697]: I0127 15:08:58.985026 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.003664 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/35bbb68b-046f-482d-8c38-e76dd8a12a61-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.004085 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/35bbb68b-046f-482d-8c38-e76dd8a12a61-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.004249 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/35bbb68b-046f-482d-8c38-e76dd8a12a61-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.004421 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf5z9\" (UniqueName: \"kubernetes.io/projected/35bbb68b-046f-482d-8c38-e76dd8a12a61-kube-api-access-sf5z9\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.004303 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/35bbb68b-046f-482d-8c38-e76dd8a12a61-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:59 crc 
kubenswrapper[4697]: I0127 15:08:59.004841 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/35bbb68b-046f-482d-8c38-e76dd8a12a61-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.018665 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/35bbb68b-046f-482d-8c38-e76dd8a12a61-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.019774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.019827 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.019857 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.019873 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.019883 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.021533 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:56Z\\\",\\\"message\\\":\\\"nt handler 6 for removal\\\\nI0127 15:08:56.557373 5853 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:08:56.557383 5853 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:08:56.557422 5853 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 15:08:56.557448 5853 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0127 15:08:56.557616 5853 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:08:56.557656 5853 factory.go:656] Stopping watch factory\\\\nI0127 15:08:56.557677 5853 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:08:56.557718 5853 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.557823 5853 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558030 5853 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558270 5853 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558371 5853 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558451 5853 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.028280 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf5z9\" (UniqueName: \"kubernetes.io/projected/35bbb68b-046f-482d-8c38-e76dd8a12a61-kube-api-access-sf5z9\") pod \"ovnkube-control-plane-749d76644c-6lf86\" (UID: \"35bbb68b-046f-482d-8c38-e76dd8a12a61\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.039494 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.053571 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.066008 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.078481 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.090881 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.102871 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: 
I0127 15:08:59.113952 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.121662 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.121698 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.121710 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.121726 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.121746 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.130381 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.142175 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.142826 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: W0127 15:08:59.154739 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35bbb68b_046f_482d_8c38_e76dd8a12a61.slice/crio-017ffcbe15aa9ef6411f1b5cda99e96b044408dd3075db8e2b212b3f81a47f46 WatchSource:0}: Error finding container 017ffcbe15aa9ef6411f1b5cda99e96b044408dd3075db8e2b212b3f81a47f46: Status 404 returned error can't find the container with id 017ffcbe15aa9ef6411f1b5cda99e96b044408dd3075db8e2b212b3f81a47f46 Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.160075 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.177098 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.191802 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.202753 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.218566 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.223989 4697 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.224016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.224023 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.224036 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.224044 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.230281 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.240177 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.264932 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.282915 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.294523 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.304200 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.311542 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e2f7cc8d470d7508ee665d8ba11d253a3953775b5a893192992dac97af1b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:56Z\\\",\\\"message\\\":\\\"nt handler 6 for removal\\\\nI0127 15:08:56.557373 5853 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 15:08:56.557383 5853 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:08:56.557422 5853 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 15:08:56.557448 5853 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0127 15:08:56.557616 5853 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:08:56.557656 5853 factory.go:656] Stopping watch factory\\\\nI0127 15:08:56.557677 5853 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 15:08:56.557718 5853 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.557823 5853 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558030 5853 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558270 5853 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558371 5853 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:08:56.558451 5853 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:57Z\\\",\\\"message\\\":\\\"dded to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:08:57.846840 
6015 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"3ec9f67e-7758-4707-a6d0-2dc28f28ac37\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" 
Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.326039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.326066 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.326073 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.326088 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.326098 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.428176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.428215 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.428225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.428241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.428250 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.530230 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.530450 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.530512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.530578 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.530638 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.533427 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:14:56.014129 +0000 UTC Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.567600 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:08:59 crc kubenswrapper[4697]: E0127 15:08:59.570273 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.567611 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:08:59 crc kubenswrapper[4697]: E0127 15:08:59.570376 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.633915 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.633968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.633987 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.634008 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.634025 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.736623 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.736671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.736682 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.736703 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.736724 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.840031 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.840080 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.840090 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.840140 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.840152 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.870672 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" event={"ID":"35bbb68b-046f-482d-8c38-e76dd8a12a61","Type":"ContainerStarted","Data":"6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.870722 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" event={"ID":"35bbb68b-046f-482d-8c38-e76dd8a12a61","Type":"ContainerStarted","Data":"017ffcbe15aa9ef6411f1b5cda99e96b044408dd3075db8e2b212b3f81a47f46"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.883613 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/1.log" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.891733 4697 scope.go:117] "RemoveContainer" containerID="c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2" Jan 27 15:08:59 crc kubenswrapper[4697]: E0127 15:08:59.892120 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.914978 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.934591 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.943002 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.943037 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.943049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.943070 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.943083 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:08:59Z","lastTransitionTime":"2026-01-27T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.961283 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:57Z\\\",\\\"message\\\":\\\"dded to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:08:57.846840 6015 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"3ec9f67e-7758-4707-a6d0-2dc28f28ac37\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.973685 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:08:59 crc kubenswrapper[4697]: I0127 15:08:59.989577 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:59Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.002099 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.016300 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.030413 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.044418 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.045005 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.045033 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.045042 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.045057 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.045069 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.056705 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.068649 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.081829 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.095138 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.117155 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.133839 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.148105 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.148171 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.148181 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.148195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.148204 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.251656 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.251727 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.251763 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.251822 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.251836 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.309620 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vwctp"] Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.310253 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:00 crc kubenswrapper[4697]: E0127 15:09:00.310332 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.327956 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.341956 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.353928 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.355774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.355847 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.355862 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.355901 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.355913 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.368220 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.384540 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.399064 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.420230 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.420500 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.420563 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr85v\" (UniqueName: \"kubernetes.io/projected/11ed6885-450d-477c-8e08-acf5fbde2fa3-kube-api-access-tr85v\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.436955 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.458606 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc 
kubenswrapper[4697]: I0127 15:09:00.458659 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.458671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.458687 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.458697 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.460203 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:57Z\\\",\\\"message\\\":\\\"dded to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:08:57.846840 6015 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"3ec9f67e-7758-4707-a6d0-2dc28f28ac37\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.475948 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.491139 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.507104 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.518589 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.521246 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.521300 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr85v\" (UniqueName: \"kubernetes.io/projected/11ed6885-450d-477c-8e08-acf5fbde2fa3-kube-api-access-tr85v\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:00 crc kubenswrapper[4697]: E0127 15:09:00.521919 4697 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:00 crc kubenswrapper[4697]: E0127 15:09:00.522005 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs podName:11ed6885-450d-477c-8e08-acf5fbde2fa3 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:01.021977185 +0000 UTC m=+37.194376966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs") pod "network-metrics-daemon-vwctp" (UID: "11ed6885-450d-477c-8e08-acf5fbde2fa3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.533870 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:26:57.998320902 +0000 UTC Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.536488 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.548281 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc 
kubenswrapper[4697]: I0127 15:09:00.549388 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr85v\" (UniqueName: \"kubernetes.io/projected/11ed6885-450d-477c-8e08-acf5fbde2fa3-kube-api-access-tr85v\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.561993 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.562024 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.562050 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.562066 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.562077 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.563287 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.567688 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:00 crc kubenswrapper[4697]: E0127 15:09:00.567861 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.665521 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.665561 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.665569 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.665583 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.665594 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.768450 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.768484 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.768494 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.768508 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.768518 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.871807 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.871851 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.871863 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.871878 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.871890 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.895285 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" event={"ID":"35bbb68b-046f-482d-8c38-e76dd8a12a61","Type":"ContainerStarted","Data":"cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.917646 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-co
nfig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.932845 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.953575 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.967315 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.973979 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.974182 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.974308 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.974390 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.974464 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:00Z","lastTransitionTime":"2026-01-27T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.986665 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:57Z\\\",\\\"message\\\":\\\"dded to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:08:57.846840 6015 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"3ec9f67e-7758-4707-a6d0-2dc28f28ac37\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:00 crc kubenswrapper[4697]: I0127 15:09:00.997800 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:00Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.010939 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.025011 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc 
kubenswrapper[4697]: I0127 15:09:01.031795 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.032393 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.032446 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs podName:11ed6885-450d-477c-8e08-acf5fbde2fa3 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:02.032428992 +0000 UTC m=+38.204828773 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs") pod "network-metrics-daemon-vwctp" (UID: "11ed6885-450d-477c-8e08-acf5fbde2fa3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.039173 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.054150 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.070527 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.077734 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.078040 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.078136 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.078231 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.078311 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.086572 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.102229 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.114169 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.116059 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.116133 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.116150 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.116224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.116239 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.125858 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.131724 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.135651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.135811 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.135902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.135982 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.136066 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.144308 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.150689 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.154246 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.154277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.154285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.154299 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.154309 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.168181 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.171216 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.171321 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.171400 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.171477 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.171539 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.182852 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.188901 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.188948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.188961 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.188978 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.188993 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.200415 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.200677 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.202500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.202599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.202843 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.202945 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc 
kubenswrapper[4697]: I0127 15:09:01.203017 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.232961 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.233209 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:09:17.233188381 +0000 UTC m=+53.405588162 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.305295 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.305360 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.305377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.305404 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.305428 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.335135 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.335199 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.335226 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.335252 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335294 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335315 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335327 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335341 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335368 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:17.335355278 +0000 UTC m=+53.507755059 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335383 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 15:09:17.335376158 +0000 UTC m=+53.507775939 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335441 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335460 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335486 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:17.33546995 +0000 UTC m=+53.507869741 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335486 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335511 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.335552 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:17.335541003 +0000 UTC m=+53.507940794 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.407705 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.407743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.407754 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.407808 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.407822 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.510178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.510210 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.510220 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.510236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.510249 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.534323 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:20:29.021884008 +0000 UTC Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.567868 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.567946 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.567994 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:01 crc kubenswrapper[4697]: E0127 15:09:01.568104 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.613525 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.613575 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.613587 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.613605 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.613620 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.717611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.717669 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.717679 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.717699 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.717710 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.820634 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.821053 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.821150 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.821261 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.821357 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.923561 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.923612 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.923622 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.923635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:01 crc kubenswrapper[4697]: I0127 15:09:01.923643 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:01Z","lastTransitionTime":"2026-01-27T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.026813 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.026854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.026867 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.026886 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.026898 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.044602 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:02 crc kubenswrapper[4697]: E0127 15:09:02.044913 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:02 crc kubenswrapper[4697]: E0127 15:09:02.045028 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs podName:11ed6885-450d-477c-8e08-acf5fbde2fa3 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:04.045010377 +0000 UTC m=+40.217410158 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs") pod "network-metrics-daemon-vwctp" (UID: "11ed6885-450d-477c-8e08-acf5fbde2fa3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.130186 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.130228 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.130238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.130256 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.130271 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.232201 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.232231 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.232240 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.232254 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.232264 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.335286 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.335336 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.335352 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.335374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.335390 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.438173 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.438586 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.438606 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.438629 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.438644 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.535407 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:50:19.879397335 +0000 UTC Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.541271 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.541300 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.541309 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.541322 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.541333 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.567834 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:02 crc kubenswrapper[4697]: E0127 15:09:02.567954 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.568037 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:02 crc kubenswrapper[4697]: E0127 15:09:02.568148 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.643500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.643607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.643628 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.643651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.643701 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.746304 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.746340 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.746353 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.746372 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.746388 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.848749 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.848821 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.848834 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.848864 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.848879 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.951460 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.951495 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.951502 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.951518 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:02 crc kubenswrapper[4697]: I0127 15:09:02.951526 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:02Z","lastTransitionTime":"2026-01-27T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.054303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.054601 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.054700 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.054797 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.054858 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.157874 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.158252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.158337 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.158420 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.158504 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.260540 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.260815 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.260892 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.260958 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.261017 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.363610 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.363663 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.363680 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.363707 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.363724 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.467270 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.467325 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.467336 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.467355 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.467366 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.535852 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:08:16.90611119 +0000 UTC Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.568304 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:03 crc kubenswrapper[4697]: E0127 15:09:03.568458 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.568321 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:03 crc kubenswrapper[4697]: E0127 15:09:03.568947 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.570220 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.570276 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.570299 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.570331 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.570355 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.673365 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.673425 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.673436 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.673461 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.673481 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.776690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.776753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.776770 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.776828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.776852 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.879968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.880010 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.880023 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.880043 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.880054 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.982659 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.983099 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.983303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.983583 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:03 crc kubenswrapper[4697]: I0127 15:09:03.983743 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:03Z","lastTransitionTime":"2026-01-27T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.068984 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:04 crc kubenswrapper[4697]: E0127 15:09:04.069208 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:04 crc kubenswrapper[4697]: E0127 15:09:04.069815 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs podName:11ed6885-450d-477c-8e08-acf5fbde2fa3 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:08.069749548 +0000 UTC m=+44.242149369 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs") pod "network-metrics-daemon-vwctp" (UID: "11ed6885-450d-477c-8e08-acf5fbde2fa3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.086374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.086457 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.086606 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.086633 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.086646 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:04Z","lastTransitionTime":"2026-01-27T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.189638 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.189696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.189713 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.189736 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.189753 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:04Z","lastTransitionTime":"2026-01-27T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.292975 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.293047 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.293092 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.293119 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.293138 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:04Z","lastTransitionTime":"2026-01-27T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.396084 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.396153 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.396167 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.396188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.396203 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:04Z","lastTransitionTime":"2026-01-27T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.499409 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.499455 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.499466 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.499487 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.499499 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:04Z","lastTransitionTime":"2026-01-27T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.536330 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:00:21.797470446 +0000 UTC Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.568220 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.568266 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:04 crc kubenswrapper[4697]: E0127 15:09:04.568366 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:04 crc kubenswrapper[4697]: E0127 15:09:04.568511 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.580900 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.593467 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.602553 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.602599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.602608 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.602620 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.602628 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:04Z","lastTransitionTime":"2026-01-27T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.608400 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.622140 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.634096 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.645349 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.667819 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:57Z\\\",\\\"message\\\":\\\"dded to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:08:57.846840 6015 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"3ec9f67e-7758-4707-a6d0-2dc28f28ac37\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.681067 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.694620 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.705044 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.705232 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.705301 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.705377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.705447 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:04Z","lastTransitionTime":"2026-01-27T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.707268 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.718164 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.732079 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.752472 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.769218 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: 
I0127 15:09:04.780044 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc 
kubenswrapper[4697]: I0127 15:09:04.789693 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.807576 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.807623 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.807633 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.807647 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.807656 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:04Z","lastTransitionTime":"2026-01-27T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.909189 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.909220 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.909229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.909242 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:04 crc kubenswrapper[4697]: I0127 15:09:04.909254 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:04Z","lastTransitionTime":"2026-01-27T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.012691 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.012778 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.012853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.012876 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.012929 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.115828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.115889 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.115912 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.115941 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.115963 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.219545 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.219584 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.219604 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.219621 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.219630 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.323273 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.323332 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.323349 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.323377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.323397 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.427569 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.427618 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.427630 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.427650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.427662 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.529976 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.530030 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.530045 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.530067 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.530083 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.537340 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:19:44.111845142 +0000 UTC Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.567757 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:05 crc kubenswrapper[4697]: E0127 15:09:05.567974 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.568017 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:05 crc kubenswrapper[4697]: E0127 15:09:05.568087 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.568623 4697 scope.go:117] "RemoveContainer" containerID="3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.634016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.634069 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.634081 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.634096 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.634108 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.735887 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.735929 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.735943 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.735964 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.735978 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.838764 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.838828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.838844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.838867 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.838884 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.913834 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.915435 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.916649 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.937608 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790
bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.943064 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.943122 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.943144 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.943172 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.943193 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:05Z","lastTransitionTime":"2026-01-27T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.954536 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.969291 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad5
3f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:05 crc kubenswrapper[4697]: I0127 15:09:05.989646 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.007380 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.020062 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.032048 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.045658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.045708 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.045722 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.045742 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.045756 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.056410 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:57Z\\\",\\\"message\\\":\\\"dded to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:08:57.846840 6015 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"3ec9f67e-7758-4707-a6d0-2dc28f28ac37\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.071071 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.123686 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.147500 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.147907 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.147940 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.147952 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.147966 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.147977 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.169954 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.184916 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.198278 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.207990 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc kubenswrapper[4697]: 
I0127 15:09:06.216855 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:06 crc 
kubenswrapper[4697]: I0127 15:09:06.250410 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.250444 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.250455 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.250471 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.250481 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.352806 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.352833 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.352844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.352867 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.352879 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.455423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.455480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.455499 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.455522 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.455541 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.538002 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:34:20.944507243 +0000 UTC Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.557972 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.558017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.558026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.558040 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.558051 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.568212 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.568232 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:06 crc kubenswrapper[4697]: E0127 15:09:06.568299 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:06 crc kubenswrapper[4697]: E0127 15:09:06.568372 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.660258 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.660303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.660312 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.660326 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.660335 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.762992 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.763049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.763067 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.763088 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.763104 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.866200 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.866250 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.866261 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.866278 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.866290 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.971511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.971566 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.971584 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.971608 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:06 crc kubenswrapper[4697]: I0127 15:09:06.971630 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:06Z","lastTransitionTime":"2026-01-27T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.074307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.074343 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.074354 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.074368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.074378 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:07Z","lastTransitionTime":"2026-01-27T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.177077 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.177114 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.177123 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.177138 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.177148 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:07Z","lastTransitionTime":"2026-01-27T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.279910 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.279957 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.279967 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.279984 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.279996 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:07Z","lastTransitionTime":"2026-01-27T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.382414 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.382455 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.382469 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.382525 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.382540 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:07Z","lastTransitionTime":"2026-01-27T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.485674 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.485742 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.485764 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.485827 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.485856 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:07Z","lastTransitionTime":"2026-01-27T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.538320 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:10:12.246747518 +0000 UTC Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.567538 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.567582 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:07 crc kubenswrapper[4697]: E0127 15:09:07.567715 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:07 crc kubenswrapper[4697]: E0127 15:09:07.567881 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.589238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.589271 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.589280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.589295 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.589305 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:07Z","lastTransitionTime":"2026-01-27T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.692259 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.692338 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.692356 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.692382 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.692400 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:07Z","lastTransitionTime":"2026-01-27T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.795566 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.795627 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.795645 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.795672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.795690 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:07Z","lastTransitionTime":"2026-01-27T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.899480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.899557 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.899580 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.899610 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:07 crc kubenswrapper[4697]: I0127 15:09:07.899633 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:07Z","lastTransitionTime":"2026-01-27T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.003176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.003225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.003241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.003262 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.003276 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.106476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.106542 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.106565 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.106589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.106607 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.106744 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:08 crc kubenswrapper[4697]: E0127 15:09:08.106967 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:08 crc kubenswrapper[4697]: E0127 15:09:08.107030 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs podName:11ed6885-450d-477c-8e08-acf5fbde2fa3 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:16.10700831 +0000 UTC m=+52.279408131 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs") pod "network-metrics-daemon-vwctp" (UID: "11ed6885-450d-477c-8e08-acf5fbde2fa3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.210394 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.210468 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.210487 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.210519 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.210545 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.313518 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.313581 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.313599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.313625 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.313642 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.416766 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.416864 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.416882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.416904 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.416921 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.519611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.519750 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.519772 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.519833 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.519863 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.539171 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:48:10.727015177 +0000 UTC Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.567694 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:08 crc kubenswrapper[4697]: E0127 15:09:08.567830 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.567891 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:08 crc kubenswrapper[4697]: E0127 15:09:08.568083 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.623091 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.623141 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.623157 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.623177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.623188 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.726660 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.726731 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.726756 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.726827 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.726851 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.829964 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.830037 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.830055 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.830081 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.830142 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.932663 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.932722 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.932741 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.932765 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:08 crc kubenswrapper[4697]: I0127 15:09:08.932812 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:08Z","lastTransitionTime":"2026-01-27T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.036474 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.036544 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.036566 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.036593 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.036611 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.139834 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.139873 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.139883 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.139899 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.139909 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.243314 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.243366 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.243377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.243397 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.243431 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.346223 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.346263 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.346281 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.346302 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.346316 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.448963 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.448999 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.449010 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.449026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.449037 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.539327 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:51:28.847519463 +0000 UTC Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.552598 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.552695 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.552713 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.552739 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.552758 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.568246 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:09 crc kubenswrapper[4697]: E0127 15:09:09.568408 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.568250 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:09 crc kubenswrapper[4697]: E0127 15:09:09.568570 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.657029 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.657083 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.657103 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.657131 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.657151 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.761599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.761684 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.761709 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.761744 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.761767 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.864513 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.864548 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.864556 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.864571 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.864584 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.967308 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.967345 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.967353 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.967367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:09 crc kubenswrapper[4697]: I0127 15:09:09.967376 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:09Z","lastTransitionTime":"2026-01-27T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.069936 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.070014 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.070034 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.070058 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.070079 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.172762 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.172811 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.172819 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.172834 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.172844 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.275729 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.275819 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.275835 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.275853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.275866 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.380203 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.380281 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.380302 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.380328 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.380351 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.483546 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.483589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.483601 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.483618 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.483630 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.540256 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:26:45.620798008 +0000 UTC Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.567901 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.567902 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:10 crc kubenswrapper[4697]: E0127 15:09:10.568068 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:10 crc kubenswrapper[4697]: E0127 15:09:10.568142 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.585516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.585549 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.585557 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.585570 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.585579 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.688912 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.688970 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.688982 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.689001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.689364 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.792510 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.792616 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.792635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.793039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.793056 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.895951 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.896014 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.896030 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.896052 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.896069 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.998697 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.998743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.998763 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.998798 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:10 crc kubenswrapper[4697]: I0127 15:09:10.998811 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:10Z","lastTransitionTime":"2026-01-27T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.102468 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.102523 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.102534 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.102580 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.102594 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.205298 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.205367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.205380 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.205398 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.205411 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.308622 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.308687 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.308718 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.308747 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.308767 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.412434 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.412514 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.412546 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.412573 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.412592 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.481174 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.481252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.481275 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.481307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.481328 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: E0127 15:09:11.506123 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.513569 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.513651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.513700 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.513718 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.513737 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: E0127 15:09:11.529611 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.535649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.535707 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.535722 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.535746 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.535768 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.540457 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:36:17.134271911 +0000 UTC Jan 27 15:09:11 crc kubenswrapper[4697]: E0127 15:09:11.550094 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",
\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.553463 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.553494 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.553505 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.553521 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.553532 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: E0127 15:09:11.567127 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.567768 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.567867 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:11 crc kubenswrapper[4697]: E0127 15:09:11.567899 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:11 crc kubenswrapper[4697]: E0127 15:09:11.568087 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.571853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.571887 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.571897 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.571913 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.571925 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: E0127 15:09:11.583113 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:11 crc kubenswrapper[4697]: E0127 15:09:11.583495 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.585013 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.585068 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.585081 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.585100 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.585112 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.687766 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.687839 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.687850 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.687866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.687877 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.790050 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.790088 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.790098 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.790113 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.790123 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.892590 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.892624 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.892635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.892650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.892662 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.995795 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.996077 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.996150 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.996233 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:11 crc kubenswrapper[4697]: I0127 15:09:11.996303 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:11Z","lastTransitionTime":"2026-01-27T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.099069 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.099462 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.099600 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.099955 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.100153 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:12Z","lastTransitionTime":"2026-01-27T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.203046 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.203388 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.203536 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.203650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.203816 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:12Z","lastTransitionTime":"2026-01-27T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.306508 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.306746 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.306898 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.307090 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.307226 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:12Z","lastTransitionTime":"2026-01-27T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.410416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.410467 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.410483 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.410511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.410525 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:12Z","lastTransitionTime":"2026-01-27T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.513292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.513361 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.513385 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.513415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.513437 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:12Z","lastTransitionTime":"2026-01-27T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.540861 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:11:38.942543419 +0000 UTC Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.567958 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.567958 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:12 crc kubenswrapper[4697]: E0127 15:09:12.568111 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:12 crc kubenswrapper[4697]: E0127 15:09:12.568164 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.616193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.616452 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.616530 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.616609 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.616683 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:12Z","lastTransitionTime":"2026-01-27T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.719931 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.719983 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.720000 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.720023 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.720039 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:12Z","lastTransitionTime":"2026-01-27T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.823527 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.823581 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.823597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.823619 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.823637 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:12Z","lastTransitionTime":"2026-01-27T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.927210 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.927297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.927321 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.927358 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:12 crc kubenswrapper[4697]: I0127 15:09:12.927382 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:12Z","lastTransitionTime":"2026-01-27T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.030482 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.030545 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.030563 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.030585 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.030602 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.133968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.134041 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.134053 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.134070 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.134083 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.236170 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.236248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.236276 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.236307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.236326 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.338868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.339110 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.339186 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.339264 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.339330 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.442168 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.442237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.442259 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.442288 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.442306 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.541794 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:33:51.187903078 +0000 UTC Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.544194 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.544226 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.544238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.544252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.544260 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.567688 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:13 crc kubenswrapper[4697]: E0127 15:09:13.567841 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.567985 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.569570 4697 scope.go:117] "RemoveContainer" containerID="c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2" Jan 27 15:09:13 crc kubenswrapper[4697]: E0127 15:09:13.572292 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.646939 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.646977 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.646992 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.647009 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.647020 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.749303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.749631 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.749643 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.749660 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.749676 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.852251 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.852300 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.852312 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.852334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.852347 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.945354 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/1.log" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.948191 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.949358 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.954548 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.954579 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.954591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.954607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.954620 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:13Z","lastTransitionTime":"2026-01-27T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.968391 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:13 crc kubenswrapper[4697]: I0127 15:09:13.985775 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56f
a81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:13.999991 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad5
3f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.021984 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.038891 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.050600 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.056251 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.056277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.056285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.056297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.056306 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.061633 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.078485 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:57Z\\\",\\\"message\\\":\\\"dded to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:08:57.846840 6015 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"3ec9f67e-7758-4707-a6d0-2dc28f28ac37\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mo
untPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.095040 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.108104 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.120732 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.136717 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.153423 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.158244 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.158277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.158288 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.158302 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.158312 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.167433 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.180896 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.193196 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc 
kubenswrapper[4697]: I0127 15:09:14.260007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.260050 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.260058 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.260073 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.260083 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.362048 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.362081 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.362092 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.362107 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.362120 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.463951 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.463978 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.463986 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.464000 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.464009 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.543285 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:56:35.594559228 +0000 UTC Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.566358 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.566417 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.566435 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.566462 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.566494 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.567576 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:14 crc kubenswrapper[4697]: E0127 15:09:14.567723 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.567874 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:14 crc kubenswrapper[4697]: E0127 15:09:14.568056 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.587068 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.604193 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.619467 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.633940 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.647524 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.663994 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869be
a5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.668498 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.668589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.668613 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.668645 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.668661 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.687004 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.702399 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.715305 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.730481 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.741294 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.758054 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:57Z\\\",\\\"message\\\":\\\"dded to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:08:57.846840 6015 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"3ec9f67e-7758-4707-a6d0-2dc28f28ac37\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mo
untPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.769460 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.772017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.772057 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.772067 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.772082 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.772091 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.784038 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc
/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.797002 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc 
kubenswrapper[4697]: I0127 15:09:14.806388 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.874198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.874237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.874247 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.874263 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.874273 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.954453 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/2.log" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.955153 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/1.log" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.957350 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d" exitCode=1 Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.957388 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.957436 4697 scope.go:117] "RemoveContainer" containerID="c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.958368 4697 scope.go:117] "RemoveContainer" containerID="c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d" Jan 27 15:09:14 crc kubenswrapper[4697]: E0127 15:09:14.958672 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.975767 4697 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.976526 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.976565 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.976574 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.976591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.976601 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:14Z","lastTransitionTime":"2026-01-27T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:14 crc kubenswrapper[4697]: I0127 15:09:14.988324 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.002039 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.018147 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.030019 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.041217 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.054014 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.069947 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69c1cef25da3355f8dacd4b9acc8d52cdc6e32c3149645679a66420b2ff1fc2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:08:57Z\\\",\\\"message\\\":\\\"dded to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:08:57Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:08:57.846840 6015 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"3ec9f67e-7758-4707-a6d0-2dc28f28ac37\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 
625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.078848 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.078880 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.078889 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.078902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.078912 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:15Z","lastTransitionTime":"2026-01-27T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.082423 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.095813 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.112359 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.124171 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.138969 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.150928 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: 
I0127 15:09:15.165030 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc 
kubenswrapper[4697]: I0127 15:09:15.175991 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.180679 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.180738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.180747 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.180760 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.180769 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:15Z","lastTransitionTime":"2026-01-27T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.283197 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.283258 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.283269 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.283288 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.283299 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:15Z","lastTransitionTime":"2026-01-27T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.385581 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.385624 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.385635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.385651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.385664 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:15Z","lastTransitionTime":"2026-01-27T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.487529 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.487570 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.487581 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.487597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.487607 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:15Z","lastTransitionTime":"2026-01-27T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.544325 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:28:38.748773576 +0000 UTC Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.567878 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.568462 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:15 crc kubenswrapper[4697]: E0127 15:09:15.568575 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:15 crc kubenswrapper[4697]: E0127 15:09:15.568868 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.590582 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.590626 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.590638 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.590653 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.590663 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:15Z","lastTransitionTime":"2026-01-27T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.693476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.693522 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.693535 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.693553 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.693565 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:15Z","lastTransitionTime":"2026-01-27T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.795624 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.795666 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.795678 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.795694 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.795707 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:15Z","lastTransitionTime":"2026-01-27T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.898415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.898854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.898968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.899577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.899653 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:15Z","lastTransitionTime":"2026-01-27T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.964543 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/2.log" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.970619 4697 scope.go:117] "RemoveContainer" containerID="c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d" Jan 27 15:09:15 crc kubenswrapper[4697]: E0127 15:09:15.970906 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" Jan 27 15:09:15 crc kubenswrapper[4697]: I0127 15:09:15.995010 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.002666 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.002740 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.002762 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.002815 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.002833 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.021633 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.039580 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.058856 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.077711 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.090264 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869be
a5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.104818 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.105015 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.105115 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.105218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.105315 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.106816 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.124375 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.140707 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.155016 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.167937 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.188448 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.198107 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:16 crc kubenswrapper[4697]: E0127 15:09:16.198298 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:16 crc kubenswrapper[4697]: E0127 15:09:16.198364 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs podName:11ed6885-450d-477c-8e08-acf5fbde2fa3 nodeName:}" failed. 
No retries permitted until 2026-01-27 15:09:32.198342863 +0000 UTC m=+68.370742684 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs") pod "network-metrics-daemon-vwctp" (UID: "11ed6885-450d-477c-8e08-acf5fbde2fa3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.202629 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.207076 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.207252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.207418 4697 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.207568 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.207701 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.214051 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73
e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\"
 for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.227143 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc 
kubenswrapper[4697]: I0127 15:09:16.236935 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.309845 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.310207 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.310387 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.310580 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.310848 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.414005 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.414071 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.414083 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.414102 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.414118 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.516716 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.516755 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.516767 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.516829 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.516844 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.545341 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:28:31.202132817 +0000 UTC Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.567828 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.568083 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:16 crc kubenswrapper[4697]: E0127 15:09:16.568510 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:16 crc kubenswrapper[4697]: E0127 15:09:16.568619 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.618717 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.618792 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.618801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.618817 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.618826 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.721444 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.721472 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.721480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.721492 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.721502 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.824970 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.825037 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.825054 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.825075 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.825089 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.928081 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.928176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.928188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.928205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:16 crc kubenswrapper[4697]: I0127 15:09:16.928218 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:16Z","lastTransitionTime":"2026-01-27T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.031053 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.031089 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.031101 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.031117 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.031130 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.133323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.133366 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.133377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.133395 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.133407 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.236504 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.236582 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.236601 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.236628 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.236648 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.312070 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.312234 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:09:49.312213834 +0000 UTC m=+85.484613625 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.338844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.338881 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.338889 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.338903 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.338913 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.413060 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.413110 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.413130 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.413150 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413215 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413269 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:49.413255175 +0000 UTC m=+85.585654956 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413275 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413328 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413348 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413357 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413382 4697 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413407 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413422 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413446 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:49.413406429 +0000 UTC m=+85.585806270 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413471 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:49.41346147 +0000 UTC m=+85.585861261 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.413502 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:09:49.413490721 +0000 UTC m=+85.585890522 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.441035 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.441067 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.441076 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.441089 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.441098 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.543126 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.543164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.543176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.543195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.543207 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.546379 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:28:11.992465493 +0000 UTC Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.567710 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.567816 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.567909 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:17 crc kubenswrapper[4697]: E0127 15:09:17.568008 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.645939 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.645991 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.646001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.646017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.646030 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.724383 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.736595 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.740439 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nod
e-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.749250 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.749298 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.749312 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.749330 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.749342 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.754620 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.767433 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad5
3f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.781698 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.794801 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.807150 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.817642 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.833560 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.845986 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.851575 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.851634 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.851649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.851703 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.851720 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.874419 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.900450 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.915438 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.930389 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.948380 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.953912 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.953948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.953959 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.953976 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.953988 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:17Z","lastTransitionTime":"2026-01-27T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.958596 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:17 crc kubenswrapper[4697]: I0127 15:09:17.967001 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:18 crc 
kubenswrapper[4697]: I0127 15:09:18.056203 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.056517 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.056938 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.057198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.057440 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.160488 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.160771 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.161103 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.161323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.161442 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.264632 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.265017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.265147 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.265270 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.265453 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.368925 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.368995 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.369015 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.369045 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.369090 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.472294 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.472362 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.472386 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.472416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.472437 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.547442 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 00:17:44.799002784 +0000 UTC Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.567771 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.567771 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:18 crc kubenswrapper[4697]: E0127 15:09:18.567931 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:18 crc kubenswrapper[4697]: E0127 15:09:18.568048 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.574537 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.574577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.574591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.574608 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.574621 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.676635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.676671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.676681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.676696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.676706 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.780327 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.780402 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.780425 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.780455 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.780476 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.883126 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.883167 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.883177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.883193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.883205 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.984751 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.984812 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.984826 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.984845 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:18 crc kubenswrapper[4697]: I0127 15:09:18.984872 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:18Z","lastTransitionTime":"2026-01-27T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.088049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.088100 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.088118 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.088141 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.088159 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:19Z","lastTransitionTime":"2026-01-27T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.191254 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.191302 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.191352 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.191369 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.191379 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:19Z","lastTransitionTime":"2026-01-27T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.293745 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.293829 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.293846 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.293865 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.293875 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:19Z","lastTransitionTime":"2026-01-27T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.396748 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.396858 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.396876 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.396903 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.396920 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:19Z","lastTransitionTime":"2026-01-27T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.499076 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.499156 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.499166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.499253 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.499269 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:19Z","lastTransitionTime":"2026-01-27T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.548854 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:22:33.222466409 +0000 UTC Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.568158 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.568234 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:19 crc kubenswrapper[4697]: E0127 15:09:19.568283 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:19 crc kubenswrapper[4697]: E0127 15:09:19.568446 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.602210 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.602245 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.602253 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.602268 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.602277 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:19Z","lastTransitionTime":"2026-01-27T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.733171 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.733200 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.733210 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.733223 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.733231 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:19Z","lastTransitionTime":"2026-01-27T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.834721 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.835127 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.835173 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.835188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.835200 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:19Z","lastTransitionTime":"2026-01-27T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.937539 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.937629 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.937650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.937673 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:19 crc kubenswrapper[4697]: I0127 15:09:19.937687 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:19Z","lastTransitionTime":"2026-01-27T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.042025 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.042076 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.042088 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.042104 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.042124 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.144149 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.144188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.144198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.144215 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.144225 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.246301 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.246349 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.246359 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.246374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.246384 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.348678 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.348721 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.348733 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.348750 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.348762 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.451451 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.451850 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.452012 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.452145 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.452274 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.549492 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:00:47.180873057 +0000 UTC Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.554159 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.554197 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.554208 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.554221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.554231 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.567561 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.567660 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:20 crc kubenswrapper[4697]: E0127 15:09:20.567727 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:20 crc kubenswrapper[4697]: E0127 15:09:20.567799 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.657088 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.657126 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.657134 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.657148 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.657157 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.759953 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.759997 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.760009 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.760026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.760039 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.861831 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.861866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.861875 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.861891 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.861903 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.964392 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.964460 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.964476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.964491 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:20 crc kubenswrapper[4697]: I0127 15:09:20.964503 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:20Z","lastTransitionTime":"2026-01-27T15:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.067311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.067374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.067390 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.067410 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.067424 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.170387 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.170477 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.170504 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.170535 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.170563 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.273721 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.273772 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.273818 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.273840 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.273856 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.375868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.375918 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.375927 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.375939 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.375948 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.478753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.478832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.478848 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.478869 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.478886 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.550447 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:58:16.248936402 +0000 UTC Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.567677 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.567775 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:21 crc kubenswrapper[4697]: E0127 15:09:21.567902 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:21 crc kubenswrapper[4697]: E0127 15:09:21.568116 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.581613 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.581652 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.581661 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.581674 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.581683 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.601842 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.614288 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9
da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.626429 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.643877 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.657238 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.671250 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.683536 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.683590 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.683606 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.683620 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.683630 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.686964 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.690217 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.690281 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.690296 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.690312 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.690322 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: E0127 15:09:21.704339 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.713593 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.713644 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.713671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.713696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.713713 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.717498 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: E0127 15:09:21.728554 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.732930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.732969 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.732986 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.733010 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.733027 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.733961 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.745586 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc 
kubenswrapper[4697]: E0127 15:09:21.749267 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.752344 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.752395 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.752411 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.752432 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.752449 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.761262 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: E0127 15:09:21.767191 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.770024 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.770119 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.770188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.770268 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.770390 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.780160 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: E0127 15:09:21.782577 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: E0127 15:09:21.782721 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.785547 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.785574 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.785585 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.785601 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.785614 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.791172 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20
f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.802587 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.813775 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.824853 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.833215 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.843847 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869be
a5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:21Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.888324 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.888369 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.888382 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.888399 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.888410 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.990934 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.990993 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.991011 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.991035 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:21 crc kubenswrapper[4697]: I0127 15:09:21.991053 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:21Z","lastTransitionTime":"2026-01-27T15:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.093997 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.094075 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.094097 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.094129 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.094151 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:22Z","lastTransitionTime":"2026-01-27T15:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.196732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.196844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.196869 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.196897 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.196918 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:22Z","lastTransitionTime":"2026-01-27T15:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.299639 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.299721 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.299734 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.299759 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.299801 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:22Z","lastTransitionTime":"2026-01-27T15:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.403090 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.403163 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.403186 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.403214 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.403236 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:22Z","lastTransitionTime":"2026-01-27T15:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.506638 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.506716 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.506738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.506765 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.506824 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:22Z","lastTransitionTime":"2026-01-27T15:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.551116 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:35:58.595153161 +0000 UTC Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.567944 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.568069 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:22 crc kubenswrapper[4697]: E0127 15:09:22.568206 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:22 crc kubenswrapper[4697]: E0127 15:09:22.568363 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.609931 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.609972 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.609985 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.610002 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.610015 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:22Z","lastTransitionTime":"2026-01-27T15:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.713262 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.713349 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.713367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.713417 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.713434 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:22Z","lastTransitionTime":"2026-01-27T15:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.815844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.815896 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.815908 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.815927 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.815941 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:22Z","lastTransitionTime":"2026-01-27T15:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.919004 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.919101 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.919121 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.919145 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:22 crc kubenswrapper[4697]: I0127 15:09:22.919164 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:22Z","lastTransitionTime":"2026-01-27T15:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.021857 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.021901 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.021916 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.021937 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.021953 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.124889 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.124929 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.124941 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.124958 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.124969 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.227685 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.227738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.227748 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.227764 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.227776 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.330532 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.330611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.330635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.330663 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.330686 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.434384 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.434423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.434434 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.434451 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.434470 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.537007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.537042 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.537053 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.537069 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.537079 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.552315 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:20:24.756081611 +0000 UTC Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.567698 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.567704 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:23 crc kubenswrapper[4697]: E0127 15:09:23.567872 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:23 crc kubenswrapper[4697]: E0127 15:09:23.567970 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.639572 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.639608 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.639617 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.639629 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.639637 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.742242 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.742280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.742298 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.742319 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.742331 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.844502 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.844535 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.844546 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.844562 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.844574 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.947342 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.947397 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.947425 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.947446 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:23 crc kubenswrapper[4697]: I0127 15:09:23.947459 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:23Z","lastTransitionTime":"2026-01-27T15:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.051069 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.051141 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.051159 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.051184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.051203 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.154927 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.154997 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.155020 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.155050 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.155076 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.257853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.257888 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.257897 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.257913 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.257922 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.359756 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.359820 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.359837 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.359852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.359863 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.462556 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.462640 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.462683 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.462700 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.462712 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.553823 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:46:12.643891831 +0000 UTC Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.565270 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.565546 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.565655 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.565761 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.565903 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.567259 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:24 crc kubenswrapper[4697]: E0127 15:09:24.567473 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.568565 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:24 crc kubenswrapper[4697]: E0127 15:09:24.568669 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.597439 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.612304 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.625117 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.636259 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.651007 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.665660 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.668130 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.668157 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.668166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.668180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.668209 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.677911 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.690521 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.701928 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc 
kubenswrapper[4697]: I0127 15:09:24.713500 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.722662 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.731862 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad5
3f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.743892 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d
201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.755094 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.768641 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.769972 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.769997 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.770005 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc 
kubenswrapper[4697]: I0127 15:09:24.770026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.770035 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.780134 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.791321 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:24Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.872539 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.872582 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.872591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.872607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.872624 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.974690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.974727 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.974734 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.974748 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:24 crc kubenswrapper[4697]: I0127 15:09:24.974757 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:24Z","lastTransitionTime":"2026-01-27T15:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.077562 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.077599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.077607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.077623 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.077633 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:25Z","lastTransitionTime":"2026-01-27T15:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.180029 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.180063 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.180076 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.180092 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.180103 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:25Z","lastTransitionTime":"2026-01-27T15:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.283185 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.283215 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.283224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.283238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.283250 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:25Z","lastTransitionTime":"2026-01-27T15:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.386908 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.386970 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.386991 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.387017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.387034 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:25Z","lastTransitionTime":"2026-01-27T15:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.489698 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.489735 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.489747 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.489763 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.489773 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:25Z","lastTransitionTime":"2026-01-27T15:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.554876 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:26:25.724529223 +0000 UTC Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.568220 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.568289 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:25 crc kubenswrapper[4697]: E0127 15:09:25.568335 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:25 crc kubenswrapper[4697]: E0127 15:09:25.568433 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.593275 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.593330 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.593347 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.593370 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.593388 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:25Z","lastTransitionTime":"2026-01-27T15:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.696103 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.696138 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.696149 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.696167 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.696177 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:25Z","lastTransitionTime":"2026-01-27T15:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.799146 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.799211 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.799247 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.799275 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.799296 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:25Z","lastTransitionTime":"2026-01-27T15:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.902281 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.902331 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.902342 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.902359 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:25 crc kubenswrapper[4697]: I0127 15:09:25.902371 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:25Z","lastTransitionTime":"2026-01-27T15:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.005241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.005303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.005322 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.005348 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.005373 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.107585 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.107638 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.107657 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.107679 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.107695 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.210153 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.210447 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.210534 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.210613 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.210723 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.312731 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.312778 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.312832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.312864 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.312875 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.415274 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.415322 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.415340 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.415358 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.415407 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.518520 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.518569 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.518584 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.518603 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.518616 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.555065 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:27:41.674226662 +0000 UTC Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.567389 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:26 crc kubenswrapper[4697]: E0127 15:09:26.567513 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.567562 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:26 crc kubenswrapper[4697]: E0127 15:09:26.567689 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.620705 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.620759 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.620825 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.620854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.620892 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.723286 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.723324 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.723336 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.723352 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.723364 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.825515 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.825551 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.825559 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.825574 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.825583 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.927895 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.927925 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.927934 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.927947 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:26 crc kubenswrapper[4697]: I0127 15:09:26.927956 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:26Z","lastTransitionTime":"2026-01-27T15:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.031061 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.031107 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.031119 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.031134 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.031146 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.134132 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.134173 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.134183 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.134196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.134205 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.239533 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.239592 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.239610 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.239634 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.239658 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.342072 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.342350 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.342467 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.342574 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.342679 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.445259 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.445546 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.445650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.445766 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.445909 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.548228 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.548269 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.548278 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.548296 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.548307 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.555852 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:33:43.661221465 +0000 UTC Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.568145 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:27 crc kubenswrapper[4697]: E0127 15:09:27.568489 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.568139 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:27 crc kubenswrapper[4697]: E0127 15:09:27.568765 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.650828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.650890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.650914 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.650948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.650970 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.754903 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.754966 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.754984 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.755007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.755025 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.857842 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.857895 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.857906 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.857927 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.857940 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.959970 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.960003 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.960012 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.960031 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:27 crc kubenswrapper[4697]: I0127 15:09:27.960040 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:27Z","lastTransitionTime":"2026-01-27T15:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.062291 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.062347 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.062360 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.062377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.062396 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.165694 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.165732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.165742 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.165756 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.165765 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.268341 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.268436 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.268453 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.268471 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.268484 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.371524 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.371804 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.371873 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.371964 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.372054 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.474846 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.475249 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.475408 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.475622 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.475767 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.555977 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:02:13.839728759 +0000 UTC Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.568404 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:28 crc kubenswrapper[4697]: E0127 15:09:28.568608 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.568678 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:28 crc kubenswrapper[4697]: E0127 15:09:28.568989 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.570612 4697 scope.go:117] "RemoveContainer" containerID="c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d" Jan 27 15:09:28 crc kubenswrapper[4697]: E0127 15:09:28.571093 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.578858 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.579157 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.579272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc 
kubenswrapper[4697]: I0127 15:09:28.579400 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.579809 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.682218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.682245 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.682253 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.682265 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.682273 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.784518 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.784567 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.784582 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.784599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.784609 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.887205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.887261 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.887277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.887296 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.887308 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.990044 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.990081 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.990092 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.990108 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:28 crc kubenswrapper[4697]: I0127 15:09:28.990119 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:28Z","lastTransitionTime":"2026-01-27T15:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.093322 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.093720 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.093951 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.094099 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.094214 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:29Z","lastTransitionTime":"2026-01-27T15:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.196703 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.196741 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.196750 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.196766 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.196775 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:29Z","lastTransitionTime":"2026-01-27T15:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.299132 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.299172 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.299182 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.299196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.299204 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:29Z","lastTransitionTime":"2026-01-27T15:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.401528 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.401756 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.401852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.401925 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.401983 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:29Z","lastTransitionTime":"2026-01-27T15:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.504133 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.504165 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.504202 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.504219 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.504229 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:29Z","lastTransitionTime":"2026-01-27T15:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.557167 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:30:55.920091425 +0000 UTC Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.567431 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.567444 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:29 crc kubenswrapper[4697]: E0127 15:09:29.567559 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:29 crc kubenswrapper[4697]: E0127 15:09:29.567677 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.606085 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.606321 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.606396 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.606485 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.606585 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:29Z","lastTransitionTime":"2026-01-27T15:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.709248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.709297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.709309 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.709326 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.709337 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:29Z","lastTransitionTime":"2026-01-27T15:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.811543 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.811606 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.811616 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.811631 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.811656 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:29Z","lastTransitionTime":"2026-01-27T15:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.913849 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.913892 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.913902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.913918 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:29 crc kubenswrapper[4697]: I0127 15:09:29.913930 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:29Z","lastTransitionTime":"2026-01-27T15:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.015515 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.015564 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.015574 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.015589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.015600 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.117837 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.117884 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.117899 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.117915 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.117929 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.220753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.220804 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.220814 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.220829 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.220838 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.326406 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.326478 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.326500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.326532 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.326555 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.428608 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.428639 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.428646 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.428658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.428668 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.531137 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.531168 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.531179 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.531194 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.531204 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.557756 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:48:23.431514923 +0000 UTC Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.568102 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.568124 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:30 crc kubenswrapper[4697]: E0127 15:09:30.568237 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:30 crc kubenswrapper[4697]: E0127 15:09:30.568404 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.633081 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.633117 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.633135 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.633156 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.633171 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.735031 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.735084 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.735093 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.735108 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.735119 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.837817 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.837852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.837866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.837881 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.837893 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.941166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.941221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.941236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.941263 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:30 crc kubenswrapper[4697]: I0127 15:09:30.941280 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:30Z","lastTransitionTime":"2026-01-27T15:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.043700 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.043755 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.043855 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.043882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.043898 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.146609 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.146649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.146659 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.146675 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.146689 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.248867 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.248908 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.248916 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.248932 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.248942 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.351130 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.351173 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.351182 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.351195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.351203 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.453205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.453251 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.453263 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.453281 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.453295 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.555732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.555762 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.555770 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.555802 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.555812 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.558076 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:50:48.144615164 +0000 UTC Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.567389 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.567400 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:31 crc kubenswrapper[4697]: E0127 15:09:31.567475 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:31 crc kubenswrapper[4697]: E0127 15:09:31.567539 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.657837 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.657888 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.657898 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.657919 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.657930 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.761275 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.761314 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.761326 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.761343 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.761354 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.863080 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.863126 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.863140 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.863158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.863170 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.878624 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.878656 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.878664 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.878677 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.878704 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: E0127 15:09:31.894667 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.898122 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.898278 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.898292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.898307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.898357 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: E0127 15:09:31.912160 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.915407 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.915435 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.915447 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.915465 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.915477 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: E0127 15:09:31.929571 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.932869 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.932903 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.932915 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.932933 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.932948 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: E0127 15:09:31.944852 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.948178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.948216 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.948228 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.948246 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.948258 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:31 crc kubenswrapper[4697]: E0127 15:09:31.959059 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:31Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:31 crc kubenswrapper[4697]: E0127 15:09:31.959175 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.965039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.965061 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.965070 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.965082 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:31 crc kubenswrapper[4697]: I0127 15:09:31.965092 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:31Z","lastTransitionTime":"2026-01-27T15:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.067007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.067044 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.067058 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.067074 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.067087 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.168763 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.168817 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.168828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.168843 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.168855 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.266419 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:32 crc kubenswrapper[4697]: E0127 15:09:32.266588 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:32 crc kubenswrapper[4697]: E0127 15:09:32.266645 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs podName:11ed6885-450d-477c-8e08-acf5fbde2fa3 nodeName:}" failed. No retries permitted until 2026-01-27 15:10:04.266627932 +0000 UTC m=+100.439027723 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs") pod "network-metrics-daemon-vwctp" (UID: "11ed6885-450d-477c-8e08-acf5fbde2fa3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.271130 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.271156 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.271166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.271180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.271190 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.373825 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.373872 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.373883 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.373899 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.373910 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.475582 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.475635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.475646 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.475661 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.475672 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.558490 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:51:24.704056233 +0000 UTC Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.567909 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.568005 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:32 crc kubenswrapper[4697]: E0127 15:09:32.568043 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:32 crc kubenswrapper[4697]: E0127 15:09:32.568121 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.577584 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.577611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.577620 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.577631 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.577641 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.679648 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.679709 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.679725 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.679749 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.679759 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.782228 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.782272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.782281 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.782298 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.782308 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.884424 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.884459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.884470 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.884486 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.884501 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.986206 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.986243 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.986252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.986267 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:32 crc kubenswrapper[4697]: I0127 15:09:32.986278 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:32Z","lastTransitionTime":"2026-01-27T15:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.088122 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.088154 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.088162 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.088175 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.088185 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:33Z","lastTransitionTime":"2026-01-27T15:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.190886 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.190926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.190937 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.190955 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.190966 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:33Z","lastTransitionTime":"2026-01-27T15:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.293369 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.293420 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.293435 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.293452 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.293466 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:33Z","lastTransitionTime":"2026-01-27T15:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.395577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.395618 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.395627 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.395643 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.395653 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:33Z","lastTransitionTime":"2026-01-27T15:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.498061 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.498254 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.498311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.498397 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.498460 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:33Z","lastTransitionTime":"2026-01-27T15:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.558746 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:56:00.032858397 +0000 UTC Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.568162 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:33 crc kubenswrapper[4697]: E0127 15:09:33.568385 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.568653 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:33 crc kubenswrapper[4697]: E0127 15:09:33.568816 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.601161 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.601205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.601218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.601233 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.601246 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:33Z","lastTransitionTime":"2026-01-27T15:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.704006 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.704285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.704350 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.704422 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.704485 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:33Z","lastTransitionTime":"2026-01-27T15:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.807510 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.807552 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.807567 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.807583 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.807596 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:33Z","lastTransitionTime":"2026-01-27T15:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.909854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.909895 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.909911 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.909932 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:33 crc kubenswrapper[4697]: I0127 15:09:33.909947 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:33Z","lastTransitionTime":"2026-01-27T15:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.012334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.012377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.012388 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.012403 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.012414 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.114879 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.114918 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.114930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.114948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.114958 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.217407 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.217464 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.217519 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.217537 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.217548 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.320607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.320663 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.320676 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.320690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.320699 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.423516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.423556 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.423569 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.423585 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.423597 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.525824 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.525872 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.525883 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.525900 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.525910 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.560333 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:48:52.4975374 +0000 UTC Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.567891 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:34 crc kubenswrapper[4697]: E0127 15:09:34.568189 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.568507 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:34 crc kubenswrapper[4697]: E0127 15:09:34.568558 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.580373 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.592913 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.603521 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.612759 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.622362 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869be
a5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.628108 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.628291 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.628375 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.628441 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.628508 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.634543 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.645774 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.658139 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.674541 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.688221 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.706026 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.717291 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.729142 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.730195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.730237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.730248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.730265 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.730276 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.740512 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.751600 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.760650 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc 
kubenswrapper[4697]: I0127 15:09:34.768355 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:34Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.831870 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.831897 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.831905 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.831917 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.831925 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.935151 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.935188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.935198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.935213 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:34 crc kubenswrapper[4697]: I0127 15:09:34.935224 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:34Z","lastTransitionTime":"2026-01-27T15:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.025161 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq89t_7fbc1c27-fba2-40df-95dd-3842bd1f1906/kube-multus/0.log" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.025205 4697 generic.go:334] "Generic (PLEG): container finished" podID="7fbc1c27-fba2-40df-95dd-3842bd1f1906" containerID="c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0" exitCode=1 Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.025230 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq89t" event={"ID":"7fbc1c27-fba2-40df-95dd-3842bd1f1906","Type":"ContainerDied","Data":"c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.025643 4697 scope.go:117] "RemoveContainer" containerID="c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.037160 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.037403 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.037411 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.037425 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.037434 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.038996 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.050089 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad5
3f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.063612 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d
201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.081319 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.092685 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.105302 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.116980 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.129483 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.139612 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.139852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.139946 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.140041 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.140126 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.142600 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d
493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.153258 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.165014 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.176455 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.192161 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.203811 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:34Z\\\",\\\"message\\\":\\\"2026-01-27T15:08:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301\\\\n2026-01-27T15:08:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301 to /host/opt/cni/bin/\\\\n2026-01-27T15:08:49Z [verbose] multus-daemon started\\\\n2026-01-27T15:08:49Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:09:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.220601 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.231947 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.242923 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.242963 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.242972 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc 
kubenswrapper[4697]: I0127 15:09:35.242985 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.243026 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.244046 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:35 crc 
kubenswrapper[4697]: I0127 15:09:35.345191 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.345243 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.345255 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.345272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.345284 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.447285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.447334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.447345 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.447368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.447379 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.550132 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.550184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.550198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.550215 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.550230 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.561450 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:58:14.225139805 +0000 UTC Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.567930 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.568072 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:35 crc kubenswrapper[4697]: E0127 15:09:35.568175 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:35 crc kubenswrapper[4697]: E0127 15:09:35.568282 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.652269 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.652315 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.652325 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.652340 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.652349 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.754806 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.755049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.755121 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.755208 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.755267 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.857736 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.857779 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.857793 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.857808 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.857822 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.960449 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.960727 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.960811 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.960911 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:35 crc kubenswrapper[4697]: I0127 15:09:35.960974 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:35Z","lastTransitionTime":"2026-01-27T15:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.034934 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq89t_7fbc1c27-fba2-40df-95dd-3842bd1f1906/kube-multus/0.log" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.034997 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq89t" event={"ID":"7fbc1c27-fba2-40df-95dd-3842bd1f1906","Type":"ContainerStarted","Data":"55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe"} Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.046188 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.059385 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d
201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.063162 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.063208 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.063221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.063238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.063250 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.070230 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20
f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.081474 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.093097 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.103936 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.113355 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.125327 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869be
a5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.137564 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:
46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.149256 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.159766 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.166256 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 
15:09:36.166292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.166304 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.166320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.166330 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.172605 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.186165 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-01-27T15:09:34Z\\\",\\\"message\\\":\\\"2026-01-27T15:08:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301\\\\n2026-01-27T15:08:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301 to /host/opt/cni/bin/\\\\n2026-01-27T15:08:49Z [verbose] multus-daemon started\\\\n2026-01-27T15:08:49Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:09:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.203042 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.214645 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.225348 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.236147 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:36Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:36 crc 
kubenswrapper[4697]: I0127 15:09:36.268895 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.268930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.268938 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.268952 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.268962 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.371536 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.371568 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.371576 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.371588 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.371598 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.474170 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.474230 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.474239 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.474256 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.474266 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.561828 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:58:46.211962205 +0000 UTC
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.568612 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.568700 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:09:36 crc kubenswrapper[4697]: E0127 15:09:36.568763 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3"
Jan 27 15:09:36 crc kubenswrapper[4697]: E0127 15:09:36.568832 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.576236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.576280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.576294 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.576314 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.576328 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.680327 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.680428 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.680436 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.680450 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.680460 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.783188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.783271 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.783287 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.783305 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.783319 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.885857 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.885902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.885912 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.885929 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.885941 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.989075 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.989339 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.989434 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.989535 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:36 crc kubenswrapper[4697]: I0127 15:09:36.989628 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:36Z","lastTransitionTime":"2026-01-27T15:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.091690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.091733 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.091755 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.091774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.091806 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:37Z","lastTransitionTime":"2026-01-27T15:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.194205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.194248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.194258 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.194272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.194283 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:37Z","lastTransitionTime":"2026-01-27T15:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.296464 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.296504 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.296516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.296531 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.296543 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:37Z","lastTransitionTime":"2026-01-27T15:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.399290 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.399332 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.399340 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.399354 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.399363 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:37Z","lastTransitionTime":"2026-01-27T15:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.501748 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.501813 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.501828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.501844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.501855 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:37Z","lastTransitionTime":"2026-01-27T15:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.562271 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:41:13.24925087 +0000 UTC
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.567592 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.567622 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:09:37 crc kubenswrapper[4697]: E0127 15:09:37.567733 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:09:37 crc kubenswrapper[4697]: E0127 15:09:37.567850 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.604743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.604811 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.604824 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.604842 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.604850 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:37Z","lastTransitionTime":"2026-01-27T15:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.707317 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.707351 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.707361 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.707374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.707383 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:37Z","lastTransitionTime":"2026-01-27T15:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.809371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.809416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.809428 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.809444 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.809455 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:37Z","lastTransitionTime":"2026-01-27T15:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.916774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.916844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.916857 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.916875 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:37 crc kubenswrapper[4697]: I0127 15:09:37.916888 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:37Z","lastTransitionTime":"2026-01-27T15:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.020398 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.020444 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.020455 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.020472 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.020486 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.122923 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.122973 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.122982 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.122996 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.123005 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.224979 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.225018 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.225030 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.225045 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.225059 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.327224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.327530 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.327606 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.327678 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.327747 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.430355 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.430395 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.430406 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.430422 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.430433 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.532538 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.532881 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.532987 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.533065 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.533129 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.562889 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:19:11.787408097 +0000 UTC
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.568416 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.568540 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:09:38 crc kubenswrapper[4697]: E0127 15:09:38.568657 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3"
Jan 27 15:09:38 crc kubenswrapper[4697]: E0127 15:09:38.568843 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.635766 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.635838 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.635847 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.635860 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.635868 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.737988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.738249 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.738336 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.738438 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.738524 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.841032 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.841466 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.841587 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.841692 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.841779 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.944326 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.944361 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.944371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.944383 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:38 crc kubenswrapper[4697]: I0127 15:09:38.944392 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:38Z","lastTransitionTime":"2026-01-27T15:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.045860 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.045893 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.045904 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.045921 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.045932 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.147931 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.147990 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.148002 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.148018 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.148029 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.250566 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.250604 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.250614 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.250628 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.250641 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.353284 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.353349 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.353363 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.353380 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.353391 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.455362 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.455411 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.455423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.455438 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.455448 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.558106 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.558146 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.558159 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.558174 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.558254 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.563728 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:11:30.631288992 +0000 UTC Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.568257 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.568275 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:39 crc kubenswrapper[4697]: E0127 15:09:39.568540 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:39 crc kubenswrapper[4697]: E0127 15:09:39.568420 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.662780 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.662831 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.662839 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.662853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.662863 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.765355 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.765416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.765428 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.765444 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.765456 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.873316 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.873371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.873382 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.873400 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.873413 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.975702 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.975730 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.975740 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.975754 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:39 crc kubenswrapper[4697]: I0127 15:09:39.975763 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:39Z","lastTransitionTime":"2026-01-27T15:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.078838 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.078894 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.078907 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.078925 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.078937 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:40Z","lastTransitionTime":"2026-01-27T15:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.180716 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.180744 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.180752 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.180764 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.180773 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:40Z","lastTransitionTime":"2026-01-27T15:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.283450 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.283478 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.283508 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.283523 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.283534 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:40Z","lastTransitionTime":"2026-01-27T15:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.386527 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.386565 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.386575 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.386589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.386599 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:40Z","lastTransitionTime":"2026-01-27T15:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.490109 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.490185 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.490199 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.490222 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.490237 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:40Z","lastTransitionTime":"2026-01-27T15:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.564125 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:21:05.072509196 +0000 UTC Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.567508 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:40 crc kubenswrapper[4697]: E0127 15:09:40.567658 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.567939 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:40 crc kubenswrapper[4697]: E0127 15:09:40.568021 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.593191 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.593257 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.593274 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.593296 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.593309 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:40Z","lastTransitionTime":"2026-01-27T15:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.696480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.696743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.696863 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.696988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.697107 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:40Z","lastTransitionTime":"2026-01-27T15:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.799981 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.800381 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.800715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.800887 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.800985 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:40Z","lastTransitionTime":"2026-01-27T15:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.905616 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.905656 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.905665 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.905681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:40 crc kubenswrapper[4697]: I0127 15:09:40.905690 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:40Z","lastTransitionTime":"2026-01-27T15:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.008956 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.009001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.009011 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.009029 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.009039 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.111075 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.111113 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.111125 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.111143 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.111155 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.214136 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.214178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.214190 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.214208 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.214221 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.317185 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.317229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.317237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.317252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.317262 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.419924 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.420050 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.420061 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.420075 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.420084 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.522861 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.522898 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.522913 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.522931 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.522944 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.564663 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:22:07.902845221 +0000 UTC Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.568212 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.568280 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:41 crc kubenswrapper[4697]: E0127 15:09:41.568367 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:41 crc kubenswrapper[4697]: E0127 15:09:41.568419 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.625224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.625261 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.625270 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.625284 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.625294 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.727875 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.728180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.728269 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.728378 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.728466 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.830577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.830861 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.831009 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.831127 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.831214 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.934047 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.934090 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.934100 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.934113 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.934125 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.997126 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.997170 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.997180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.997196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:41 crc kubenswrapper[4697]: I0127 15:09:41.997206 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:41Z","lastTransitionTime":"2026-01-27T15:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: E0127 15:09:42.009352 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.012896 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.013056 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.013150 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.013241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.013316 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: E0127 15:09:42.024417 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.028385 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.028497 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.028570 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.028649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.028708 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: E0127 15:09:42.039767 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.043120 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.043151 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.043161 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.043174 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.043182 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: E0127 15:09:42.054705 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.057801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.057848 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.057857 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.057868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.057876 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: E0127 15:09:42.071416 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:42Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:42 crc kubenswrapper[4697]: E0127 15:09:42.071537 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.073830 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.073862 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.073881 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.073906 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.073918 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.176068 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.176096 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.176106 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.176136 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.176144 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.278717 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.278745 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.278752 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.278765 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.278774 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.381277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.381563 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.381637 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.381699 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.381759 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.484660 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.484702 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.484713 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.484732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.484743 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.565436 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:15:54.077338864 +0000 UTC Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.567894 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.568021 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:42 crc kubenswrapper[4697]: E0127 15:09:42.568302 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:42 crc kubenswrapper[4697]: E0127 15:09:42.568205 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.586359 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.586655 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.586792 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.586964 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.587098 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.689169 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.689217 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.689229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.689249 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.689262 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.792334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.792642 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.792777 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.792891 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.793000 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.895462 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.895738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.895854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.895964 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.896044 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:42 crc kubenswrapper[4697]: I0127 15:09:42.999696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:42.999744 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:42.999832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:42.999852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:42.999864 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:42Z","lastTransitionTime":"2026-01-27T15:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.102490 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.102533 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.102546 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.102560 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.102572 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:43Z","lastTransitionTime":"2026-01-27T15:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.205742 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.205852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.205874 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.205904 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.205925 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:43Z","lastTransitionTime":"2026-01-27T15:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.308509 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.308577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.308591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.308610 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.308621 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:43Z","lastTransitionTime":"2026-01-27T15:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.410617 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.410661 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.410671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.410689 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.410701 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:43Z","lastTransitionTime":"2026-01-27T15:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.512898 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.512948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.512961 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.512978 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.512989 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:43Z","lastTransitionTime":"2026-01-27T15:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.566234 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:46:15.01402978 +0000 UTC Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.567488 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:43 crc kubenswrapper[4697]: E0127 15:09:43.567667 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.567699 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:43 crc kubenswrapper[4697]: E0127 15:09:43.568041 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.568442 4697 scope.go:117] "RemoveContainer" containerID="c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.582825 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.617622 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.617658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.617668 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.617682 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.617694 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:43Z","lastTransitionTime":"2026-01-27T15:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.720076 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.720116 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.720126 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.720140 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.720153 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:43Z","lastTransitionTime":"2026-01-27T15:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.822090 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.822452 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.822464 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.822483 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.822495 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:43Z","lastTransitionTime":"2026-01-27T15:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.924158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.924184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.924192 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.924203 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:43 crc kubenswrapper[4697]: I0127 15:09:43.924211 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:43Z","lastTransitionTime":"2026-01-27T15:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.025883 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.025917 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.025928 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.025944 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.025953 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.063041 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/2.log" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.065455 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.066233 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.079315 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c
83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.093718 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc 
kubenswrapper[4697]: I0127 15:09:44.103107 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.115417 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.127732 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad5
3f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.127938 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.127952 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.127960 4697 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.127972 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.127980 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.145139 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1b32c65-9dd1-4f66-9c0b-c9234c934d7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3eb075712ef240052637eb573fea6b47aced80df441ea60774ed40e4e35c8fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db92f9a5afaba8bbda4d8ea9bcb99b5ad334aef7f68b0cf85da3fbf0a1816d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35bc4d5235d591cc0e2af6294fb97d2fc6d7e84ce8a179acac6ea4ea4b9e5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3889f71127ca21494f008231cfc9d7f1a3c106ea419b6abd1ccebeccdcc749a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b10d0995f70041a7467f958a2d131f342a916cce576ad086a33b41bc2864fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c6
27569d63d50e02b5ce2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.160561 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d
201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.174544 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.193099 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.210429 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.225673 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.231034 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.231059 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.231069 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.231287 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.231299 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.245601 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 
625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.262880 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.283168 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.297657 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.313576 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.330261 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.333956 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.333993 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.334002 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.334017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.334028 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.344137 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:34Z\\\",\\\"message\\\":\\\"2026-01-27T15:08:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301\\\\n2026-01-27T15:08:49+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301 to /host/opt/cni/bin/\\\\n2026-01-27T15:08:49Z [verbose] multus-daemon started\\\\n2026-01-27T15:08:49Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:09:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.437120 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.437164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.437174 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.437188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.437197 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.539229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.539272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.539289 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.539309 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.539325 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.566961 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 22:07:03.278178457 +0000 UTC Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.568274 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.568321 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:44 crc kubenswrapper[4697]: E0127 15:09:44.569025 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:44 crc kubenswrapper[4697]: E0127 15:09:44.569161 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.583655 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.599821 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.613779 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.625654 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.641708 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.642009 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.642161 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.641819 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.642301 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.642461 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.655637 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:34Z\\\",\\\"message\\\":\\\"2026-01-27T15:08:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301\\\\n2026-01-27T15:08:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301 to /host/opt/cni/bin/\\\\n2026-01-27T15:08:49Z [verbose] multus-daemon started\\\\n2026-01-27T15:08:49Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T15:09:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.674356 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: 
failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 
625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.684683 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.694358 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc 
kubenswrapper[4697]: I0127 15:09:44.704488 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.723698 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1b32c65-9dd1-4f66-9c0b-c9234c934d7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3eb075712ef240052637eb573fea6b47aced80df441ea60774ed40e4e35c8fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4
9117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db92f9a5afaba8bbda4d8ea9bcb99b5ad334aef7f68b0cf85da3fbf0a1816d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35bc4d5235d591cc0e2af6294fb97d2fc6d7e84ce8a179acac6ea4ea4b9e5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3889f71127ca21494f008231cfc9d7f1a3c106ea419b6abd1ccebeccdcc749a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b10d0995f70041a7467f958a2d131f342a916cce576ad086a33b41bc2864fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce
2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.738040 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d
201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.746622 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.746660 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.746673 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.746690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.746703 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.752856 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20
f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.768234 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.780493 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.791217 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.801032 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.813437 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869be
a5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:44Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.849823 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.849881 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.849897 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.849915 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.849931 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.954758 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.954859 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.954871 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.954890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:44 crc kubenswrapper[4697]: I0127 15:09:44.954901 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:44Z","lastTransitionTime":"2026-01-27T15:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.057751 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.057784 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.057825 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.057842 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.057855 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.072669 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/3.log" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.073434 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/2.log" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.075954 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" exitCode=1 Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.076010 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.076101 4697 scope.go:117] "RemoveContainer" containerID="c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.076671 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:09:45 crc kubenswrapper[4697]: E0127 15:09:45.076838 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.100401 4697 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:34Z\\\",\\\"message\\\":\\\"2026-01-27T15:08:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301\\\\n2026-01-27T15:08:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301 to 
/host/opt/cni/bin/\\\\n2026-01-27T15:08:49Z [verbose] multus-daemon started\\\\n2026-01-27T15:08:49Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:09:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-cert
s\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.124027 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c922932b6548d6d3070183264d41bc14a0cfc7a122dfc0772c4839066544c36d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:14Z\\\",\\\"message\\\":\\\"d to start node network controller: failed to start default node network controller: 
failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:14Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:09:14.396323 6251 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396396 6251 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:14.396409 625\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:44Z\\\",\\\"message\\\":\\\"} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:44.278383 6643 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:09:44.278396 6643 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:09:44.278366 6643 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:09:44.278411 6643 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:44.278483 6643 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 15:09:44.278559 6643 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967
785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.138477 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.157507 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.161009 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.161033 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.161047 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.161065 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.161078 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.170553 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.184774 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.199374 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.211935 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: 
I0127 15:09:45.223651 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc 
kubenswrapper[4697]: I0127 15:09:45.234982 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.248434 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.261799 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.266403 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.266432 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.266442 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.266457 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.266468 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.275513 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869bea5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.300658 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1b32c65-9dd1-4f66-9c0b-c9234c934d7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3eb075712ef240052637eb573fea6b47aced80df441ea60774ed40e4e35c8fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db92f9a5afaba8bbda4d8ea9bcb99b5ad334aef7f68b0cf85da3fbf0a1816d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35bc4d5235d591cc0e2af6294fb97d2fc6d7e84ce8a179acac6ea4ea4b9e5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3889f71127ca21494f008231cfc9d7f1a3c106ea419b6abd1ccebeccdcc749a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b10d0995f70041a7467f958a2d131f342a916cce576ad086a33b41bc2864fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.318890 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.333165 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.349573 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.365891 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:45Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.368998 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.369071 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.369088 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.369114 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.369132 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.471500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.471532 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.471541 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.471553 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.471574 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.567317 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:16:20.668638134 +0000 UTC Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.568421 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:45 crc kubenswrapper[4697]: E0127 15:09:45.568527 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.568611 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:45 crc kubenswrapper[4697]: E0127 15:09:45.568837 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.574336 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.574373 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.574386 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.574406 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.574420 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.676816 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.676859 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.676873 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.676890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.676904 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.779125 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.779161 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.779171 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.779186 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.779198 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.881560 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.881914 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.881998 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.882063 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.882119 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.984052 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.984112 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.984124 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.984140 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:45 crc kubenswrapper[4697]: I0127 15:09:45.984152 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:45Z","lastTransitionTime":"2026-01-27T15:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.080351 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/3.log" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.083800 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:09:46 crc kubenswrapper[4697]: E0127 15:09:46.084112 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.085776 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.085830 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.085850 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.085865 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.085878 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:46Z","lastTransitionTime":"2026-01-27T15:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.098078 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.118060 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.136574 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.153336 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.166118 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.179657 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869be
a5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.188202 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.188247 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.188257 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.188272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.188285 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:46Z","lastTransitionTime":"2026-01-27T15:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.207160 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1b32c65-9dd1-4f66-9c0b-c9234c934d7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3eb075712ef240052637eb573fea6b47aced80df441ea60774ed40e4e35c8fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db92f9a5afaba8bbda4d8ea9bcb99b5ad334aef7f68b0cf85da3fbf0a1816d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35bc4d5235d591cc0e2af6294fb97d2fc6d7e84ce8a179acac6ea4ea4b9e5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3889f71127ca21494f008231cfc9d7f1a3c106ea419b6abd1ccebeccdcc749a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b10d0995f70041a7467f958a2d131f342a916cce576ad086a33b41bc2864fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.223728 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d
201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.241641 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.255130 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.277309 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.291215 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-01-27T15:09:34Z\\\",\\\"message\\\":\\\"2026-01-27T15:08:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301\\\\n2026-01-27T15:08:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301 to /host/opt/cni/bin/\\\\n2026-01-27T15:08:49Z [verbose] multus-daemon started\\\\n2026-01-27T15:08:49Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:09:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.293485 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.293560 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.293572 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.293594 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.293606 4697 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:46Z","lastTransitionTime":"2026-01-27T15:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.313066 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:44Z\\\",\\\"message\\\":\\\"} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:44.278383 6643 handler.go:208] Removed *v1.Node 
event handler 2\\\\nI0127 15:09:44.278396 6643 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:09:44.278366 6643 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:09:44.278411 6643 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:44.278483 6643 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 15:09:44.278559 6643 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.330152 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.344526 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.355526 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc 
kubenswrapper[4697]: I0127 15:09:46.366054 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.376403 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.396583 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.396649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.396658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.396672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.396683 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:46Z","lastTransitionTime":"2026-01-27T15:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.499494 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.499525 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.499532 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.499545 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.499555 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:46Z","lastTransitionTime":"2026-01-27T15:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.567743 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:27:42.413876682 +0000 UTC Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.567859 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.567900 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:46 crc kubenswrapper[4697]: E0127 15:09:46.568526 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:46 crc kubenswrapper[4697]: E0127 15:09:46.568865 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.602148 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.602191 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.602204 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.602221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.602239 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:46Z","lastTransitionTime":"2026-01-27T15:09:46Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.704968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.705013 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.705027 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.705045 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.705062 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:46Z","lastTransitionTime":"2026-01-27T15:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.807443 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.807487 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.807500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.807518 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.807530 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:46Z","lastTransitionTime":"2026-01-27T15:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.909530 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.909570 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.909582 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.909597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:46 crc kubenswrapper[4697]: I0127 15:09:46.909609 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:46Z","lastTransitionTime":"2026-01-27T15:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.011196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.011221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.011228 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.011242 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.011250 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.114007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.114043 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.114054 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.114069 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.114080 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.216512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.216564 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.216575 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.216591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.216605 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.319123 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.319166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.319178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.319195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.319206 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.421338 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.421389 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.421402 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.421419 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.421453 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.524131 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.524157 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.524165 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.524177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.524185 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.567690 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.567690 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:47 crc kubenswrapper[4697]: E0127 15:09:47.567846 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:47 crc kubenswrapper[4697]: E0127 15:09:47.567889 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.568694 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:28:49.712166282 +0000 UTC Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.626443 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.626479 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.626487 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.626502 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.626514 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.729325 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.729377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.729396 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.729418 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.729434 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.832226 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.832276 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.832291 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.832315 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.832332 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.934632 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.934687 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.934704 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.934726 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:47 crc kubenswrapper[4697]: I0127 15:09:47.934743 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:47Z","lastTransitionTime":"2026-01-27T15:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.037211 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.037274 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.037287 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.037303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.037316 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.140196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.140254 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.140266 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.140282 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.140294 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.243127 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.243169 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.243177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.243192 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.243203 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.345550 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.345592 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.345600 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.345614 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.345623 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.448227 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.448300 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.448327 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.448386 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.448404 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.550468 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.550521 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.550537 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.550558 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.550575 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.568963 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:24:54.166472045 +0000 UTC Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.569111 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.569220 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:48 crc kubenswrapper[4697]: E0127 15:09:48.569246 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:48 crc kubenswrapper[4697]: E0127 15:09:48.569344 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.652854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.652892 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.652904 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.652919 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.652929 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.756764 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.756820 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.756830 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.756844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.756855 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.858750 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.858829 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.858856 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.858880 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.858895 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.961662 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.961753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.961771 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.961824 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:48 crc kubenswrapper[4697]: I0127 15:09:48.961840 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:48Z","lastTransitionTime":"2026-01-27T15:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.064695 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.064812 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.064828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.064848 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.064859 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.167379 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.167423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.167434 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.167450 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.167464 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.271077 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.271120 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.271130 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.271146 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.271160 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.345377 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.345537 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:10:53.345507494 +0000 UTC m=+149.517907275 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.372886 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.373181 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.373206 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.373221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.373229 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.446363 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.446400 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.446419 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.446451 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446528 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446566 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446576 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446581 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446593 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446604 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446619 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446578 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:09:49 crc 
kubenswrapper[4697]: E0127 15:09:49.446636 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.446620017 +0000 UTC m=+149.619019798 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446747 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.446729359 +0000 UTC m=+149.619129140 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446764 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.44675823 +0000 UTC m=+149.619158011 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.446775 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.44676917 +0000 UTC m=+149.619168951 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.475892 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.475943 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.475953 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.475967 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.475977 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.568251 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.568381 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.568257 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:49 crc kubenswrapper[4697]: E0127 15:09:49.568487 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.569270 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:01:07.941279173 +0000 UTC Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.578083 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.578113 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.578121 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.578135 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.578143 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.681830 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.682483 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.682690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.682929 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.683170 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.786164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.786231 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.786250 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.786273 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.786292 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.889151 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.889218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.889236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.889261 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.889277 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.991997 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.992073 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.992085 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.992102 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:49 crc kubenswrapper[4697]: I0127 15:09:49.992113 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:49Z","lastTransitionTime":"2026-01-27T15:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.096668 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.096822 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.096854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.096926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.096952 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:50Z","lastTransitionTime":"2026-01-27T15:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.199774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.199833 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.199841 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.199857 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.199866 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:50Z","lastTransitionTime":"2026-01-27T15:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.302301 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.302338 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.302346 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.302358 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.302367 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:50Z","lastTransitionTime":"2026-01-27T15:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.405297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.405598 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.405710 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.405834 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.405937 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:50Z","lastTransitionTime":"2026-01-27T15:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.508683 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.508727 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.508739 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.508754 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.508765 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:50Z","lastTransitionTime":"2026-01-27T15:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.567668 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.567733 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:50 crc kubenswrapper[4697]: E0127 15:09:50.567829 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:50 crc kubenswrapper[4697]: E0127 15:09:50.567940 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.570305 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:24:18.26772281 +0000 UTC Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.610860 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.610908 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.610923 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.610940 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.610952 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:50Z","lastTransitionTime":"2026-01-27T15:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.714043 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.714086 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.714107 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.714127 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.714140 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:50Z","lastTransitionTime":"2026-01-27T15:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.817277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.817311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.817320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.817333 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.817343 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:50Z","lastTransitionTime":"2026-01-27T15:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.921229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.921285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.921305 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.921324 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:50 crc kubenswrapper[4697]: I0127 15:09:50.921336 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:50Z","lastTransitionTime":"2026-01-27T15:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.024285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.024313 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.024323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.024334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.024342 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.126889 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.126973 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.126996 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.127026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.127048 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.230696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.230840 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.230866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.230896 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.230963 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.334692 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.334989 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.334997 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.335013 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.335022 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.437968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.438016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.438025 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.438039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.438049 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.541690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.541749 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.541766 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.541834 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.541852 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.567545 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.567653 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:51 crc kubenswrapper[4697]: E0127 15:09:51.567703 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:51 crc kubenswrapper[4697]: E0127 15:09:51.568054 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.571364 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:51:53.891953992 +0000 UTC Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.576256 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.649834 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.650226 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.650254 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.650486 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.650503 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.753975 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.754064 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.754089 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.754170 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.754200 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.857537 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.857667 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.857681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.857698 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.857709 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.960188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.960225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.960236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.960252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:51 crc kubenswrapper[4697]: I0127 15:09:51.960263 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:51Z","lastTransitionTime":"2026-01-27T15:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.062318 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.062367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.062375 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.062390 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.062399 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.153841 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.153879 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.153890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.153907 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.153919 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: E0127 15:09:52.170391 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.174273 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.174306 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.174316 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.174331 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.174342 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: E0127 15:09:52.191900 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the preceding entry] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.195980 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.196052 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.196065 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.196082 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.196139 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: E0127 15:09:52.212492 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the preceding entry] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.215899 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.215946 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.215953 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.215966 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.215975 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: E0127 15:09:52.236592 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.239997 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.240044 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.240052 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.240066 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.240099 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: E0127 15:09:52.299876 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74b869f4-b1e4-4686-af4e-9516e0fb5017\\\",\\\"systemUUID\\\":\\\"69bca9ab-721f-415b-ad88-6626c7795f3c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:52Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:52 crc kubenswrapper[4697]: E0127 15:09:52.300026 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.301180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.301216 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.301225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.301241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.301251 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.403334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.403384 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.403397 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.403418 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.403432 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.506095 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.506132 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.506141 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.506156 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.506165 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.567840 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:52 crc kubenswrapper[4697]: E0127 15:09:52.568005 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.568190 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:52 crc kubenswrapper[4697]: E0127 15:09:52.568276 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.571750 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:35:51.355935936 +0000 UTC Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.607918 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.607972 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.607982 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.608025 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.608041 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.711323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.711380 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.711395 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.711411 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.711423 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.813843 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.813888 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.813901 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.813917 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.813931 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.917420 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.917488 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.917509 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.917539 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:52 crc kubenswrapper[4697]: I0127 15:09:52.917564 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:52Z","lastTransitionTime":"2026-01-27T15:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.020929 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.021007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.021028 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.021057 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.021078 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.124958 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.125021 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.125040 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.125066 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.125085 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.227357 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.227395 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.227405 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.227420 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.227430 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.329613 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.329646 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.329658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.329672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.329685 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.432176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.432221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.432229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.432243 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.432254 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.534345 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.534415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.534436 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.534460 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.534492 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.567979 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.567996 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:53 crc kubenswrapper[4697]: E0127 15:09:53.568124 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:53 crc kubenswrapper[4697]: E0127 15:09:53.568656 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.572212 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:52:10.735758886 +0000 UTC Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.637154 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.637247 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.637272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.637300 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.637325 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.739235 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.739285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.739295 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.739311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.739321 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.841890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.841930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.841941 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.841960 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.841972 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.945377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.945415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.945423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.945437 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:53 crc kubenswrapper[4697]: I0127 15:09:53.945446 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:53Z","lastTransitionTime":"2026-01-27T15:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.049178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.049220 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.049233 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.049250 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.049261 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.151409 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.151457 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.151473 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.151492 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.151506 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.255001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.255128 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.255155 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.255184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.255209 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.358546 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.358611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.358629 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.358653 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.358672 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.461039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.461084 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.461098 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.461114 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.461126 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.563871 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.563935 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.563952 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.563974 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.563992 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.568155 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.568246 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:54 crc kubenswrapper[4697]: E0127 15:09:54.568395 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:54 crc kubenswrapper[4697]: E0127 15:09:54.568742 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.577019 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:26:51.230724826 +0000 UTC Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.587976 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13ee612abe9aa03f8ccaf68abbdfdbeb29820484f430097aef6be1679d3efe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.601741 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a182e147723dd1c9335e6c6a910d5d53bdfc118504b6a0a9f3c91f79b6d3aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://52fcd1c6784720765f18ddc1936d3bdd625b743d27654a647ff80351957797e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.616492 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.636379 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7543bea-0b65-44e1-8c0c-bc1a13577d69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe79de88015d62a290c140e0504b9ef088f39fa79bc9b379d46fa9cdb03123f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b69d8311464a46854b17dc23de984ff37a24f3de84f8ad6033d26d5dd30afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d34049aae4e409909bb597c8bf33aa1c1ac85699cf72e33f5643145fdf9fbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b85aff4ba7e4c4eddcdfd916b42392fd8f5bd4d18caae739a7490c0576fcff1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6aff
af91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6affaf91a44dec8a9da34068ed68f480ad543e0efc8e0f584fd5002f8f6ed0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dede89b14b4d80c8b9e74c45b628b5def6a04f922bb59c06828c3a4e43deca4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a125d46e355d85444bf125e8184888e9b0c18dab3cd7b09b89ffff202e2c6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl56b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bcb9s\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.651979 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq89t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fbc1c27-fba2-40df-95dd-3842bd1f1906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-01-27T15:09:34Z\\\",\\\"message\\\":\\\"2026-01-27T15:08:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301\\\\n2026-01-27T15:08:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_df226232-eda2-4025-b167-90894438b301 to /host/opt/cni/bin/\\\\n2026-01-27T15:08:49Z [verbose] multus-daemon started\\\\n2026-01-27T15:08:49Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:09:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npp7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq89t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.665653 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.665691 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.665701 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.665719 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.665732 4697 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.679695 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:09:44Z\\\",\\\"message\\\":\\\"} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:44.278383 6643 handler.go:208] Removed *v1.Node 
event handler 2\\\\nI0127 15:09:44.278396 6643 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:09:44.278366 6643 ovnkube.go:599] Stopped ovnkube\\\\nI0127 15:09:44.278411 6643 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:09:44.278483 6643 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 15:09:44.278559 6643 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:09:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9666b8a501ef01543
1ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6jxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.694279 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bcd4fb-11e6-4087-91b7-290cd35a7292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5c74f4e3f1154431027a743528e81ec4bed30037b30a858870f74993da4691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ad05a5c3b7640af677ede45c27c40da5d118e28a9d45de0ffa60a05684121c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd615105781bcf4614f8a58cf63eeb89020db12e822192bd652a5ff23e25a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.707964 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9bec8bc-b2a6-4865-83ca-692ae5c022a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2616d07c83d73b63d4b728a30de8a7e1d76986d38f8c4c3fe019bf73e64784f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faaced835dbc76e880a1fd29824b00fca5f72068
6e476bcba6ad4f807e28e8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wz495\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.719146 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vwctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ed6885-450d-477c-8e08-acf5fbde2fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr85v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:09:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vwctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc 
kubenswrapper[4697]: I0127 15:09:54.730578 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"744865f0-3e59-4f00-8501-93f81e43d761\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd6b4774eb9f0e9e586080499fe34cd307cdd0257abf0e45e717093cbef8d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://455c5c73e0c973f4f29466798aaa9e03b0a1768678f818b93faad8a79b5c43b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c5c73e0c973f4f29466798aaa9e03b0a1768678f818b93faad8a79b5c43b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.741517 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpz4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d187caad-2501-44d6-8ced-f8d8ca5fecfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c2b6a00c426e85ca8ca4fe5790bf7badc12e0c2cc72c1454e664e809ace5e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5jqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpz4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.755962 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30821478-065e-48b2-85f3-ae69260477fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 15:08:45.318333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 15:08:45.318446 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:08:45.319039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1083979560/tls.crt::/tmp/serving-cert-1083979560/tls.key\\\\\\\"\\\\nI0127 15:08:45.778691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 15:08:45.781562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 15:08:45.781589 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 15:08:45.781614 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 15:08:45.781620 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 15:08:45.799733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 15:08:45.799756 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 15:08:45.799769 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 15:08:45.799800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 15:08:45.799806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 15:08:45.799810 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 15:08:45.799814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 15:08:45.805747 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
27T15:08:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.767989 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed572c3-ca0d-4d38-9ac0-81080c32efe5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb01f592a7ee00906befc039b4ac006fa96e5d36ae7cf4029af12500c42d0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af166859a55cb5f718a1750f4ce20f5c4259e1adad06c609ce66a907974b3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0baf6862ad66d010a3e2ca21560d76f0de57cf5afc64cc594d4b6204f5653904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b800aa3d330bd36d5613b410b0b73f5d175f0ec70a76d4eb479dcb0db8957a72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.768473 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.768507 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.768516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.768531 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.768557 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.779708 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.791607 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.806118 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://955eb03bb38f971417b1af1b193c2008607eaeda5addf30f899830dd84620c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.820647 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bdclj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed86f7b6-a042-470f-8da3-9cad4e65c550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a701152234da7522fefeed3798f4748c4f8e56fa81edd5011ad4a89bbb2e4be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f898q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bdclj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.833189 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35bbb68b-046f-482d-8c38-e76dd8a12a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6949b3c1babb1c4c69bf612b869be
a5dabf3fedc5e6c930ec3d3a51736c9651f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc445832c9ce25b3b787c029df7baad2f8ad53f7cf8705ab5e2590c85119bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf5z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6lf86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.855018 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1b32c65-9dd1-4f66-9c0b-c9234c934d7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:08:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3eb075712ef240052637eb573fea6b47aced80df441ea60774ed40e4e35c8fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db92f9a5afaba8bbda4d8ea9bcb99b5ad334aef7f68b0cf85da3fbf0a1816d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35bc4d5235d591cc0e2af6294fb97d2fc6d7e84ce8a179acac6ea4ea4b9e5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3889f71127ca21494f008231cfc9d7f1a3c106ea419b6abd1ccebeccdcc749a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b10d0995f70041a7467f958a2d131f342a916cce576ad086a33b41bc2864fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a040b13703e44987138aeacce4da557d3046864eac3120c090d95959796ba68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6fc33ccddbde32f28fee077d2abeda2e39cf7874e8f789e91a211a8a95a2313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052d6ea90d7fff038a147dffda9efd4688f2a53ee7c627569d63d50e02b5ce2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:08:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:08:24Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:09:54Z is after 2025-08-24T17:21:41Z" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.870626 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.870689 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.870707 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.870731 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.870749 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.972847 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.972884 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.972894 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.972912 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:54 crc kubenswrapper[4697]: I0127 15:09:54.972923 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:54Z","lastTransitionTime":"2026-01-27T15:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.075644 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.075690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.075703 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.075718 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.075729 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:55Z","lastTransitionTime":"2026-01-27T15:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.179473 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.179524 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.179539 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.179559 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.179759 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:55Z","lastTransitionTime":"2026-01-27T15:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.283130 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.283190 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.283206 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.283229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.283249 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:55Z","lastTransitionTime":"2026-01-27T15:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.386229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.386285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.386302 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.386327 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.386343 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:55Z","lastTransitionTime":"2026-01-27T15:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.489774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.489893 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.489914 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.489939 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.489956 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:55Z","lastTransitionTime":"2026-01-27T15:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.568092 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.568092 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:55 crc kubenswrapper[4697]: E0127 15:09:55.568446 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:55 crc kubenswrapper[4697]: E0127 15:09:55.568560 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.577330 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:42:47.632402641 +0000 UTC Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.591940 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.591983 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.591995 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.592013 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.592026 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:55Z","lastTransitionTime":"2026-01-27T15:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.694848 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.694909 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.694926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.694965 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.694985 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:55Z","lastTransitionTime":"2026-01-27T15:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.798752 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.798841 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.798856 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.798882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.798899 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:55Z","lastTransitionTime":"2026-01-27T15:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.901650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.901724 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.901738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.901757 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:55 crc kubenswrapper[4697]: I0127 15:09:55.901768 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:55Z","lastTransitionTime":"2026-01-27T15:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.004948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.005024 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.005047 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.005080 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.005128 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.108018 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.108160 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.108190 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.108219 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.108239 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.211952 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.212017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.212034 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.212058 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.212076 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.314928 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.314982 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.314999 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.315023 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.315040 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.418030 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.418125 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.418138 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.418159 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.418174 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.521035 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.521074 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.521085 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.521101 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.521113 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.568143 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:56 crc kubenswrapper[4697]: E0127 15:09:56.568331 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.568143 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:56 crc kubenswrapper[4697]: E0127 15:09:56.568650 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.577515 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:48:27.604964816 +0000 UTC Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.623879 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.623928 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.623940 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.623958 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.623971 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.725900 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.725963 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.725988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.726014 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.726030 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.828586 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.828629 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.828677 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.828724 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.828744 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.932568 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.932628 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.932651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.932672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:56 crc kubenswrapper[4697]: I0127 15:09:56.932691 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:56Z","lastTransitionTime":"2026-01-27T15:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.036349 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.036382 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.036393 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.036408 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.036420 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.138389 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.138426 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.138434 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.138447 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.138458 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.241380 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.241479 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.241530 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.241555 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.241607 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.345007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.345080 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.345105 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.345129 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.345147 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.447378 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.447463 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.447487 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.447518 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.447542 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.549391 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.549431 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.549439 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.549453 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.549461 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.567391 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.567435 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:57 crc kubenswrapper[4697]: E0127 15:09:57.567484 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:57 crc kubenswrapper[4697]: E0127 15:09:57.567547 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.578665 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:18:58.215829701 +0000 UTC Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.652371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.652427 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.652439 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.652461 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.652473 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.755369 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.755417 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.755432 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.755446 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.755458 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.857999 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.858054 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.858070 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.858093 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.858110 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.960162 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.960230 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.960252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.960280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:57 crc kubenswrapper[4697]: I0127 15:09:57.960302 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:57Z","lastTransitionTime":"2026-01-27T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.063682 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.063728 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.063740 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.063762 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.063775 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.166769 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.166894 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.166913 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.166938 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.166954 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.269883 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.269945 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.269960 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.269982 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.270000 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.372849 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.372900 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.372912 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.372930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.372942 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.475832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.475875 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.475911 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.475930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.475943 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.568255 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.568395 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:09:58 crc kubenswrapper[4697]: E0127 15:09:58.568456 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:09:58 crc kubenswrapper[4697]: E0127 15:09:58.568605 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.578507 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.578579 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.578599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.578624 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.578643 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.578876 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:11:10.482882273 +0000 UTC Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.680930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.681228 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.681332 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.681416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.681490 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.783978 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.784155 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.784174 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.784196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.784213 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.886651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.886712 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.886729 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.886752 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.886769 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.989993 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.990223 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.990282 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.990348 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:58 crc kubenswrapper[4697]: I0127 15:09:58.990401 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:58Z","lastTransitionTime":"2026-01-27T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.092514 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.092553 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.092560 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.092573 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.092581 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:59Z","lastTransitionTime":"2026-01-27T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.196351 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.196749 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.197192 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.197541 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.198223 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:59Z","lastTransitionTime":"2026-01-27T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.301041 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.301075 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.301085 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.301100 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.301112 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:59Z","lastTransitionTime":"2026-01-27T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.403145 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.403202 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.403214 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.403231 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.403241 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:59Z","lastTransitionTime":"2026-01-27T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.506253 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.506294 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.506307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.506323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.506332 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:59Z","lastTransitionTime":"2026-01-27T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.568291 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:09:59 crc kubenswrapper[4697]: E0127 15:09:59.568654 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.568384 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:09:59 crc kubenswrapper[4697]: E0127 15:09:59.569048 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.579541 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:59:02.719147366 +0000 UTC Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.608746 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.608812 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.608823 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.608841 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.608852 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:59Z","lastTransitionTime":"2026-01-27T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.712431 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.712497 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.712515 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.712540 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.712557 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:59Z","lastTransitionTime":"2026-01-27T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.816223 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.816657 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.816735 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.817038 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.817104 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:59Z","lastTransitionTime":"2026-01-27T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.920054 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.920086 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.920094 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.920108 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:09:59 crc kubenswrapper[4697]: I0127 15:09:59.920118 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:09:59Z","lastTransitionTime":"2026-01-27T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.022912 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.023275 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.023493 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.023683 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.023903 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.127030 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.127091 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.127104 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.127120 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.127132 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.230151 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.230593 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.230709 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.230861 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.230986 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.335184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.335230 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.335242 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.335260 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.335275 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.437468 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.437758 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.437934 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.438050 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.438175 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.541183 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.541248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.541273 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.541300 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.541318 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.567970 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:00 crc kubenswrapper[4697]: E0127 15:10:00.568176 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.568507 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:00 crc kubenswrapper[4697]: E0127 15:10:00.568675 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.579883 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:58:45.578967282 +0000 UTC Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.643921 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.644000 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.644023 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.644055 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.644081 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.746938 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.747011 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.747032 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.747062 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.747083 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.849960 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.850018 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.850035 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.850061 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.850077 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.952598 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.952686 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.952697 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.952715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:00 crc kubenswrapper[4697]: I0127 15:10:00.952726 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:00Z","lastTransitionTime":"2026-01-27T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.055710 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.056072 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.056188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.056307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.056396 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.158845 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.158894 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.158909 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.158925 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.158937 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.261400 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.261435 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.261445 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.261459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.261469 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.364310 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.364359 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.364367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.364381 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.364389 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.467551 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.467587 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.467598 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.467613 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.467622 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.567467 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.567544 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:01 crc kubenswrapper[4697]: E0127 15:10:01.567885 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:01 crc kubenswrapper[4697]: E0127 15:10:01.568067 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.568295 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:10:01 crc kubenswrapper[4697]: E0127 15:10:01.568509 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.569476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.569508 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.569518 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.569531 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.569541 4697 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.580814 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:57:16.507202936 +0000 UTC Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.671552 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.671592 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.671601 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.671620 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.671631 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.774357 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.774577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.774672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.774773 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.774872 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.877252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.877300 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.877318 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.877338 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.877352 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.979649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.980060 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.980147 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.980216 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:01 crc kubenswrapper[4697]: I0127 15:10:01.980273 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:01Z","lastTransitionTime":"2026-01-27T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.083316 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.083349 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.083364 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.083384 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.083398 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:02Z","lastTransitionTime":"2026-01-27T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.186663 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.186895 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.186995 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.187060 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.187123 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:02Z","lastTransitionTime":"2026-01-27T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.289147 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.289488 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.289592 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.289704 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.289815 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:02Z","lastTransitionTime":"2026-01-27T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.392671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.392706 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.392715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.392728 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.392754 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:02Z","lastTransitionTime":"2026-01-27T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.439900 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.439935 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.439945 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.439959 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.439967 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:10:02Z","lastTransitionTime":"2026-01-27T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.478836 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx"] Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.479394 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.482431 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.482696 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.486384 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.486712 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.488228 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.488294 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.488324 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.488343 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.488375 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.527059 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6lf86" podStartSLOduration=75.527038311 podStartE2EDuration="1m15.527038311s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.526359195 +0000 UTC m=+98.698758976" watchObservedRunningTime="2026-01-27 15:10:02.527038311 +0000 UTC m=+98.699438092" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.527365 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bdclj" podStartSLOduration=77.527358109 podStartE2EDuration="1m17.527358109s" 
podCreationTimestamp="2026-01-27 15:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.514548851 +0000 UTC m=+98.686948632" watchObservedRunningTime="2026-01-27 15:10:02.527358109 +0000 UTC m=+98.699757890" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.559466 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.559448088 podStartE2EDuration="19.559448088s" podCreationTimestamp="2026-01-27 15:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.55661807 +0000 UTC m=+98.729017851" watchObservedRunningTime="2026-01-27 15:10:02.559448088 +0000 UTC m=+98.731847869" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.569968 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:02 crc kubenswrapper[4697]: E0127 15:10:02.570103 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.570159 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:02 crc kubenswrapper[4697]: E0127 15:10:02.570279 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.575795 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.575766258 podStartE2EDuration="1m16.575766258s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.575467391 +0000 UTC m=+98.747867172" watchObservedRunningTime="2026-01-27 15:10:02.575766258 +0000 UTC m=+98.748166039" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.581145 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:32:25.053810173 +0000 UTC Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.581211 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.587828 4697 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.589156 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.589204 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.589243 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.589263 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.589281 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.589299 4697 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.589399 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.590423 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.594514 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.610547 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.610531132 podStartE2EDuration="45.610531132s" podCreationTimestamp="2026-01-27 15:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.58625729 +0000 UTC m=+98.758657071" watchObservedRunningTime="2026-01-27 15:10:02.610531132 +0000 UTC m=+98.782930913" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.611103 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8ed0ff2-a28e-45e6-b545-6fe4f26a4929-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7bfx\" (UID: \"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.635338 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rq89t" podStartSLOduration=76.635315686 podStartE2EDuration="1m16.635315686s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.634728011 +0000 UTC m=+98.807127792" watchObservedRunningTime="2026-01-27 15:10:02.635315686 +0000 UTC m=+98.807715467" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.700621 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.700599409 podStartE2EDuration="1m11.700599409s" podCreationTimestamp="2026-01-27 15:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.684613717 +0000 UTC m=+98.857013498" watchObservedRunningTime="2026-01-27 15:10:02.700599409 +0000 UTC m=+98.872999190" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.744026 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bcb9s" podStartSLOduration=76.744000319 
podStartE2EDuration="1m16.744000319s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.742400371 +0000 UTC m=+98.914800152" watchObservedRunningTime="2026-01-27 15:10:02.744000319 +0000 UTC m=+98.916400100" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.760983 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=11.760965656 podStartE2EDuration="11.760965656s" podCreationTimestamp="2026-01-27 15:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.760383182 +0000 UTC m=+98.932782963" watchObservedRunningTime="2026-01-27 15:10:02.760965656 +0000 UTC m=+98.933365437" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.771612 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podStartSLOduration=77.771597861 podStartE2EDuration="1m17.771597861s" podCreationTimestamp="2026-01-27 15:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.770962826 +0000 UTC m=+98.943362607" watchObservedRunningTime="2026-01-27 15:10:02.771597861 +0000 UTC m=+98.943997642" Jan 27 15:10:02 crc kubenswrapper[4697]: I0127 15:10:02.805205 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" Jan 27 15:10:03 crc kubenswrapper[4697]: I0127 15:10:03.145144 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" event={"ID":"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929","Type":"ContainerStarted","Data":"127a4a36b1a6b6a7318e183596fd2f870bbe103cb3e06edd7145edaea2e4c551"} Jan 27 15:10:03 crc kubenswrapper[4697]: I0127 15:10:03.145616 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" event={"ID":"a8ed0ff2-a28e-45e6-b545-6fe4f26a4929","Type":"ContainerStarted","Data":"ec86c3619fa428dd621d7b55856039499c38a7d51fa43c31812249e246ad8bd4"} Jan 27 15:10:03 crc kubenswrapper[4697]: I0127 15:10:03.159854 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7bfx" podStartSLOduration=77.159826573 podStartE2EDuration="1m17.159826573s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:03.158424039 +0000 UTC m=+99.330823820" watchObservedRunningTime="2026-01-27 15:10:03.159826573 +0000 UTC m=+99.332226384" Jan 27 15:10:03 crc kubenswrapper[4697]: I0127 15:10:03.161098 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lpz4j" podStartSLOduration=77.161090673 podStartE2EDuration="1m17.161090673s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:02.791631181 +0000 UTC m=+98.964030962" watchObservedRunningTime="2026-01-27 15:10:03.161090673 +0000 UTC m=+99.333490474" Jan 27 15:10:03 crc kubenswrapper[4697]: 
I0127 15:10:03.568217 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:03 crc kubenswrapper[4697]: I0127 15:10:03.568297 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:03 crc kubenswrapper[4697]: E0127 15:10:03.568760 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:03 crc kubenswrapper[4697]: E0127 15:10:03.568935 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:04 crc kubenswrapper[4697]: I0127 15:10:04.304052 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:04 crc kubenswrapper[4697]: E0127 15:10:04.304161 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:10:04 crc kubenswrapper[4697]: E0127 15:10:04.304222 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs podName:11ed6885-450d-477c-8e08-acf5fbde2fa3 nodeName:}" failed. No retries permitted until 2026-01-27 15:11:08.304203344 +0000 UTC m=+164.476603125 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs") pod "network-metrics-daemon-vwctp" (UID: "11ed6885-450d-477c-8e08-acf5fbde2fa3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:10:04 crc kubenswrapper[4697]: I0127 15:10:04.568028 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:04 crc kubenswrapper[4697]: I0127 15:10:04.568029 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:04 crc kubenswrapper[4697]: E0127 15:10:04.569718 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:04 crc kubenswrapper[4697]: E0127 15:10:04.569993 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:05 crc kubenswrapper[4697]: I0127 15:10:05.567648 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:05 crc kubenswrapper[4697]: E0127 15:10:05.567848 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:05 crc kubenswrapper[4697]: I0127 15:10:05.567650 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:05 crc kubenswrapper[4697]: E0127 15:10:05.568007 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:06 crc kubenswrapper[4697]: I0127 15:10:06.567365 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:06 crc kubenswrapper[4697]: I0127 15:10:06.567378 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:06 crc kubenswrapper[4697]: E0127 15:10:06.567566 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:06 crc kubenswrapper[4697]: E0127 15:10:06.567643 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:07 crc kubenswrapper[4697]: I0127 15:10:07.567286 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:07 crc kubenswrapper[4697]: E0127 15:10:07.567428 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:07 crc kubenswrapper[4697]: I0127 15:10:07.567289 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:07 crc kubenswrapper[4697]: E0127 15:10:07.567642 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:08 crc kubenswrapper[4697]: I0127 15:10:08.567548 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:08 crc kubenswrapper[4697]: I0127 15:10:08.567600 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:08 crc kubenswrapper[4697]: E0127 15:10:08.567716 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:08 crc kubenswrapper[4697]: E0127 15:10:08.567798 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:09 crc kubenswrapper[4697]: I0127 15:10:09.568092 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:09 crc kubenswrapper[4697]: E0127 15:10:09.568207 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:09 crc kubenswrapper[4697]: I0127 15:10:09.568372 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:09 crc kubenswrapper[4697]: E0127 15:10:09.568415 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:10 crc kubenswrapper[4697]: I0127 15:10:10.567667 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:10 crc kubenswrapper[4697]: I0127 15:10:10.567692 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:10 crc kubenswrapper[4697]: E0127 15:10:10.568037 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:10 crc kubenswrapper[4697]: E0127 15:10:10.568129 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:11 crc kubenswrapper[4697]: I0127 15:10:11.568275 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:11 crc kubenswrapper[4697]: E0127 15:10:11.568493 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:11 crc kubenswrapper[4697]: I0127 15:10:11.568289 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:11 crc kubenswrapper[4697]: E0127 15:10:11.568771 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:12 crc kubenswrapper[4697]: I0127 15:10:12.567662 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:12 crc kubenswrapper[4697]: I0127 15:10:12.567814 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:12 crc kubenswrapper[4697]: E0127 15:10:12.567934 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:12 crc kubenswrapper[4697]: E0127 15:10:12.568658 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:13 crc kubenswrapper[4697]: I0127 15:10:13.567497 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:13 crc kubenswrapper[4697]: I0127 15:10:13.567636 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:13 crc kubenswrapper[4697]: E0127 15:10:13.568041 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:13 crc kubenswrapper[4697]: E0127 15:10:13.568251 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:14 crc kubenswrapper[4697]: I0127 15:10:14.567510 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:14 crc kubenswrapper[4697]: I0127 15:10:14.567586 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:14 crc kubenswrapper[4697]: E0127 15:10:14.569019 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:14 crc kubenswrapper[4697]: E0127 15:10:14.569140 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:14 crc kubenswrapper[4697]: I0127 15:10:14.570370 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:10:14 crc kubenswrapper[4697]: E0127 15:10:14.570630 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6jxw_openshift-ovn-kubernetes(6a1ce5ad-1a8c-4a28-99d8-fc71649954ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" Jan 27 15:10:15 crc kubenswrapper[4697]: I0127 15:10:15.568004 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:15 crc kubenswrapper[4697]: I0127 15:10:15.568004 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:15 crc kubenswrapper[4697]: E0127 15:10:15.568934 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:15 crc kubenswrapper[4697]: E0127 15:10:15.569461 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:16 crc kubenswrapper[4697]: I0127 15:10:16.570026 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:16 crc kubenswrapper[4697]: E0127 15:10:16.570162 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:16 crc kubenswrapper[4697]: I0127 15:10:16.570857 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:16 crc kubenswrapper[4697]: E0127 15:10:16.571692 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:17 crc kubenswrapper[4697]: I0127 15:10:17.567976 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:17 crc kubenswrapper[4697]: E0127 15:10:17.568455 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:17 crc kubenswrapper[4697]: I0127 15:10:17.568878 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:17 crc kubenswrapper[4697]: E0127 15:10:17.569376 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:18 crc kubenswrapper[4697]: I0127 15:10:18.567857 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:18 crc kubenswrapper[4697]: I0127 15:10:18.567904 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:18 crc kubenswrapper[4697]: E0127 15:10:18.568114 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:18 crc kubenswrapper[4697]: E0127 15:10:18.568233 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:19 crc kubenswrapper[4697]: I0127 15:10:19.567850 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:19 crc kubenswrapper[4697]: E0127 15:10:19.568022 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:19 crc kubenswrapper[4697]: I0127 15:10:19.568080 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:19 crc kubenswrapper[4697]: E0127 15:10:19.568132 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:20 crc kubenswrapper[4697]: I0127 15:10:20.567850 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:20 crc kubenswrapper[4697]: I0127 15:10:20.567955 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:20 crc kubenswrapper[4697]: E0127 15:10:20.568001 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:20 crc kubenswrapper[4697]: E0127 15:10:20.568149 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:21 crc kubenswrapper[4697]: I0127 15:10:21.202636 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq89t_7fbc1c27-fba2-40df-95dd-3842bd1f1906/kube-multus/1.log" Jan 27 15:10:21 crc kubenswrapper[4697]: I0127 15:10:21.204200 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq89t_7fbc1c27-fba2-40df-95dd-3842bd1f1906/kube-multus/0.log" Jan 27 15:10:21 crc kubenswrapper[4697]: I0127 15:10:21.204448 4697 generic.go:334] "Generic (PLEG): container finished" podID="7fbc1c27-fba2-40df-95dd-3842bd1f1906" containerID="55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe" exitCode=1 Jan 27 15:10:21 crc kubenswrapper[4697]: I0127 15:10:21.204563 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq89t" event={"ID":"7fbc1c27-fba2-40df-95dd-3842bd1f1906","Type":"ContainerDied","Data":"55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe"} Jan 27 15:10:21 crc kubenswrapper[4697]: I0127 15:10:21.204643 4697 scope.go:117] "RemoveContainer" containerID="c0c056e48d3130806317f25486fea67d938a0e610f19b6089873f2fcfe4759a0" Jan 27 15:10:21 crc kubenswrapper[4697]: I0127 15:10:21.205253 4697 scope.go:117] "RemoveContainer" containerID="55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe" Jan 27 15:10:21 crc kubenswrapper[4697]: E0127 15:10:21.205560 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rq89t_openshift-multus(7fbc1c27-fba2-40df-95dd-3842bd1f1906)\"" pod="openshift-multus/multus-rq89t" podUID="7fbc1c27-fba2-40df-95dd-3842bd1f1906" Jan 27 15:10:21 crc kubenswrapper[4697]: I0127 15:10:21.568200 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:21 crc kubenswrapper[4697]: I0127 15:10:21.568256 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:21 crc kubenswrapper[4697]: E0127 15:10:21.568334 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:21 crc kubenswrapper[4697]: E0127 15:10:21.568387 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:22 crc kubenswrapper[4697]: I0127 15:10:22.209958 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq89t_7fbc1c27-fba2-40df-95dd-3842bd1f1906/kube-multus/1.log" Jan 27 15:10:22 crc kubenswrapper[4697]: I0127 15:10:22.567760 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:22 crc kubenswrapper[4697]: I0127 15:10:22.567859 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:22 crc kubenswrapper[4697]: E0127 15:10:22.568466 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:22 crc kubenswrapper[4697]: E0127 15:10:22.568473 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:23 crc kubenswrapper[4697]: I0127 15:10:23.568117 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:23 crc kubenswrapper[4697]: E0127 15:10:23.568387 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:23 crc kubenswrapper[4697]: I0127 15:10:23.568151 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:23 crc kubenswrapper[4697]: E0127 15:10:23.568973 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:24 crc kubenswrapper[4697]: E0127 15:10:24.536428 4697 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 15:10:24 crc kubenswrapper[4697]: I0127 15:10:24.568094 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:24 crc kubenswrapper[4697]: E0127 15:10:24.569130 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:24 crc kubenswrapper[4697]: I0127 15:10:24.569242 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:24 crc kubenswrapper[4697]: E0127 15:10:24.569531 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:24 crc kubenswrapper[4697]: E0127 15:10:24.676362 4697 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:10:25 crc kubenswrapper[4697]: I0127 15:10:25.567579 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:25 crc kubenswrapper[4697]: I0127 15:10:25.567661 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:25 crc kubenswrapper[4697]: E0127 15:10:25.567973 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:25 crc kubenswrapper[4697]: E0127 15:10:25.568298 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:26 crc kubenswrapper[4697]: I0127 15:10:26.568111 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:26 crc kubenswrapper[4697]: I0127 15:10:26.568940 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:10:26 crc kubenswrapper[4697]: E0127 15:10:26.569026 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:26 crc kubenswrapper[4697]: I0127 15:10:26.568110 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:26 crc kubenswrapper[4697]: E0127 15:10:26.569970 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:27 crc kubenswrapper[4697]: I0127 15:10:27.228896 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/3.log" Jan 27 15:10:27 crc kubenswrapper[4697]: I0127 15:10:27.231880 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerStarted","Data":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} Jan 27 15:10:27 crc kubenswrapper[4697]: I0127 15:10:27.232282 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:10:27 crc kubenswrapper[4697]: I0127 15:10:27.261044 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podStartSLOduration=101.26102745 podStartE2EDuration="1m41.26102745s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:27.259998356 +0000 UTC m=+123.432398167" watchObservedRunningTime="2026-01-27 15:10:27.26102745 +0000 UTC m=+123.433427231" Jan 27 15:10:27 crc kubenswrapper[4697]: I0127 15:10:27.495194 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vwctp"] Jan 27 15:10:27 crc kubenswrapper[4697]: I0127 15:10:27.495332 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:27 crc kubenswrapper[4697]: E0127 15:10:27.495460 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:27 crc kubenswrapper[4697]: I0127 15:10:27.568205 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:27 crc kubenswrapper[4697]: I0127 15:10:27.568308 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:27 crc kubenswrapper[4697]: E0127 15:10:27.568341 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:27 crc kubenswrapper[4697]: E0127 15:10:27.568481 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:28 crc kubenswrapper[4697]: I0127 15:10:28.567769 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:28 crc kubenswrapper[4697]: E0127 15:10:28.568116 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:29 crc kubenswrapper[4697]: I0127 15:10:29.568270 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:29 crc kubenswrapper[4697]: I0127 15:10:29.568300 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:29 crc kubenswrapper[4697]: E0127 15:10:29.568456 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:29 crc kubenswrapper[4697]: E0127 15:10:29.568529 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:29 crc kubenswrapper[4697]: I0127 15:10:29.568312 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:29 crc kubenswrapper[4697]: E0127 15:10:29.568636 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:29 crc kubenswrapper[4697]: E0127 15:10:29.677953 4697 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:10:30 crc kubenswrapper[4697]: I0127 15:10:30.568442 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:30 crc kubenswrapper[4697]: E0127 15:10:30.568655 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:31 crc kubenswrapper[4697]: I0127 15:10:31.567885 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:31 crc kubenswrapper[4697]: I0127 15:10:31.567936 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:31 crc kubenswrapper[4697]: E0127 15:10:31.568030 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:31 crc kubenswrapper[4697]: I0127 15:10:31.567885 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:31 crc kubenswrapper[4697]: E0127 15:10:31.568171 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:31 crc kubenswrapper[4697]: E0127 15:10:31.568407 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:32 crc kubenswrapper[4697]: I0127 15:10:32.567773 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:32 crc kubenswrapper[4697]: E0127 15:10:32.568351 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:33 crc kubenswrapper[4697]: I0127 15:10:33.568342 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:33 crc kubenswrapper[4697]: I0127 15:10:33.568475 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:33 crc kubenswrapper[4697]: E0127 15:10:33.568510 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:33 crc kubenswrapper[4697]: E0127 15:10:33.568625 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:33 crc kubenswrapper[4697]: I0127 15:10:33.569598 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:33 crc kubenswrapper[4697]: E0127 15:10:33.569948 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:34 crc kubenswrapper[4697]: I0127 15:10:34.569352 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:34 crc kubenswrapper[4697]: I0127 15:10:34.569458 4697 scope.go:117] "RemoveContainer" containerID="55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe" Jan 27 15:10:34 crc kubenswrapper[4697]: E0127 15:10:34.570670 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:34 crc kubenswrapper[4697]: E0127 15:10:34.679594 4697 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:10:35 crc kubenswrapper[4697]: I0127 15:10:35.260299 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq89t_7fbc1c27-fba2-40df-95dd-3842bd1f1906/kube-multus/1.log" Jan 27 15:10:35 crc kubenswrapper[4697]: I0127 15:10:35.260357 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq89t" event={"ID":"7fbc1c27-fba2-40df-95dd-3842bd1f1906","Type":"ContainerStarted","Data":"5609c867cf313ac9913a53179ba2ad268eb1d131842e8928d5d9605076980d92"} Jan 27 15:10:35 crc kubenswrapper[4697]: I0127 15:10:35.567876 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:35 crc kubenswrapper[4697]: I0127 15:10:35.567917 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:35 crc kubenswrapper[4697]: I0127 15:10:35.567909 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:35 crc kubenswrapper[4697]: E0127 15:10:35.568037 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:35 crc kubenswrapper[4697]: E0127 15:10:35.568121 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:35 crc kubenswrapper[4697]: E0127 15:10:35.568196 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:36 crc kubenswrapper[4697]: I0127 15:10:36.568323 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:36 crc kubenswrapper[4697]: E0127 15:10:36.568707 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:37 crc kubenswrapper[4697]: I0127 15:10:37.568164 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:37 crc kubenswrapper[4697]: I0127 15:10:37.568220 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:37 crc kubenswrapper[4697]: I0127 15:10:37.568205 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:37 crc kubenswrapper[4697]: E0127 15:10:37.568391 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:37 crc kubenswrapper[4697]: E0127 15:10:37.568524 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:37 crc kubenswrapper[4697]: E0127 15:10:37.568619 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:38 crc kubenswrapper[4697]: I0127 15:10:38.567912 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:38 crc kubenswrapper[4697]: E0127 15:10:38.568038 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:10:39 crc kubenswrapper[4697]: I0127 15:10:39.568268 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:39 crc kubenswrapper[4697]: E0127 15:10:39.568388 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:10:39 crc kubenswrapper[4697]: I0127 15:10:39.568572 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:39 crc kubenswrapper[4697]: E0127 15:10:39.568619 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vwctp" podUID="11ed6885-450d-477c-8e08-acf5fbde2fa3" Jan 27 15:10:39 crc kubenswrapper[4697]: I0127 15:10:39.568715 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:39 crc kubenswrapper[4697]: E0127 15:10:39.568766 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:10:40 crc kubenswrapper[4697]: I0127 15:10:40.567791 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:40 crc kubenswrapper[4697]: I0127 15:10:40.570284 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 15:10:40 crc kubenswrapper[4697]: I0127 15:10:40.570611 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 15:10:41 crc kubenswrapper[4697]: I0127 15:10:41.568009 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:10:41 crc kubenswrapper[4697]: I0127 15:10:41.568052 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:41 crc kubenswrapper[4697]: I0127 15:10:41.568077 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:41 crc kubenswrapper[4697]: I0127 15:10:41.570722 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 15:10:41 crc kubenswrapper[4697]: I0127 15:10:41.570993 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 15:10:41 crc kubenswrapper[4697]: I0127 15:10:41.571149 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 15:10:41 crc kubenswrapper[4697]: I0127 15:10:41.571160 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.262218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 15:10:43 crc 
kubenswrapper[4697]: I0127 15:10:43.306952 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9p5q5"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.308670 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.312377 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7ddjp"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.313189 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.318107 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wjd95"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.318996 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.319589 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.320165 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.320476 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.321518 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.321556 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nmrvs"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.325196 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.325301 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.325222 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.325635 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.326020 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.326211 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.326089 4697 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.329875 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.330125 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.340474 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v52qb"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.340600 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.341269 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.347993 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.349677 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.360215 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.361256 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.363872 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.364433 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vj475"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.365436 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.365748 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.365862 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.372989 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.373137 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.373400 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.374291 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.374519 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.374642 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.374751 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.374752 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.374962 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.375077 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.375344 4697 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.374889 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.374927 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.376446 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.376575 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.378891 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.379052 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.379530 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.381301 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.381564 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.381803 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.381966 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.381991 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.382123 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.382573 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.382586 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.382943 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.383148 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.383433 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.388867 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.389444 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.390066 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.390128 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.390285 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.390066 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.390840 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.393372 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.393585 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.393818 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 15:10:43 crc 
kubenswrapper[4697]: I0127 15:10:43.393841 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.394037 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.394225 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.394342 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.394469 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.394538 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.394630 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-78k6r"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.394728 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.394898 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.395155 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.395292 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 15:10:43 crc 
kubenswrapper[4697]: I0127 15:10:43.395998 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.400306 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.402640 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.402847 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.427268 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-46h6b"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.427804 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.428341 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.428843 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.429336 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tgccn"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.433993 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nmrvs"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.434146 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.434704 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-78k6r" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.435027 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.435357 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.435647 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.436026 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.441243 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.441908 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.443221 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.443475 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.443914 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.444891 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7ddjp"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.444943 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9p5q5"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.446668 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2p4pk"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.448274 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.449115 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.449380 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.449870 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.450021 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.450187 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.450368 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.450510 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.450558 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.450723 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.450853 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.450905 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-oauth-serving-cert\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.450868 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451100 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-config\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451195 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-serving-cert\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451261 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flj75\" (UniqueName: \"kubernetes.io/projected/cbd9208d-08ed-47af-a7cf-b9ee3973b964-kube-api-access-flj75\") pod \"machine-api-operator-5694c8668f-9p5q5\" 
(UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451296 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-config\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451381 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd9208d-08ed-47af-a7cf-b9ee3973b964-config\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451412 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894a6339-d208-46db-8769-ac9153cb1ba0-serving-cert\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451429 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451446 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-encryption-config\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451477 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-etcd-client\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451495 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgwpf\" (UniqueName: \"kubernetes.io/projected/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-kube-api-access-xgwpf\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451720 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-client-ca\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451743 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-audit\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.451778 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-service-ca\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.452987 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-oauth-config\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453133 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25dwx\" (UniqueName: \"kubernetes.io/projected/894a6339-d208-46db-8769-ac9153cb1ba0-kube-api-access-25dwx\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453186 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-etcd-serving-ca\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453306 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-node-pullsecrets\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " 
pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453356 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-trusted-ca-bundle\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453433 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cbd9208d-08ed-47af-a7cf-b9ee3973b964-images\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453491 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbd9208d-08ed-47af-a7cf-b9ee3973b964-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453522 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-audit-dir\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453575 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh84z\" (UniqueName: 
\"kubernetes.io/projected/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-kube-api-access-hh84z\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453631 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-serving-cert\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453663 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-config\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.453685 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-image-import-ca\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.454431 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.455718 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.456954 4697 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-qlprf"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.474811 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.476887 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v52qb"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.477528 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.480414 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.485098 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.485685 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-l8h5h"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.486095 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.486338 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.487836 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.488102 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.488591 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.489210 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.489621 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.489801 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.489963 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490107 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490461 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490697 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490109 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.491019 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.492438 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490254 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490295 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490362 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490532 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490585 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 
15:10:43.490620 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.490650 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.495735 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.496047 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.497050 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.497277 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.497757 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wmwsd"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.497933 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.499236 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.499700 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.500283 4697 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.501144 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.501699 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.501775 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.502459 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.502987 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.503258 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.503556 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.505646 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fxbvr"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.511111 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.512283 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.512528 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.512690 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.514777 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.521132 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.523647 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.527047 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.527969 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.529012 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vj475"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.529212 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.529735 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.534230 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5xn9m"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.534333 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.535813 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.543971 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.545688 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.556700 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qvsns"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.557807 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.565003 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k5tl7"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.565916 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.568635 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45xm2"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.569507 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.569828 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.570384 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.571907 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wjd95"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.573556 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.574802 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.575733 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-etcd-client\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.575770 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/420813d8-71d5-401e-9af6-61296b8a25ba-machine-approver-tls\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.575813 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6dbb097-e288-4cf9-8aa5-f35c997358df-audit-dir\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.575834 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f06c8d-2fe6-44f7-8870-0142002dfae8-serving-cert\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.575913 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgwpf\" (UniqueName: \"kubernetes.io/projected/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-kube-api-access-xgwpf\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.575937 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6dbb097-e288-4cf9-8aa5-f35c997358df-encryption-config\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.575962 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576025 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5fql\" (UniqueName: \"kubernetes.io/projected/0ac2a583-d3dc-433a-8b0f-92c9984f6b20-kube-api-access-f5fql\") pod \"openshift-apiserver-operator-796bbdcf4f-mwhvk\" (UID: 
\"0ac2a583-d3dc-433a-8b0f-92c9984f6b20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576051 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-client-ca\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576112 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-audit\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576140 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l5qn\" (UniqueName: \"kubernetes.io/projected/8ab80f79-35d6-42cc-8480-e2a778d41da7-kube-api-access-8l5qn\") pod \"cluster-samples-operator-665b6dd947-tggvq\" (UID: \"8ab80f79-35d6-42cc-8480-e2a778d41da7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576190 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-config\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576215 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-oauth-config\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576269 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-service-ca\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576297 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac2a583-d3dc-433a-8b0f-92c9984f6b20-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mwhvk\" (UID: \"0ac2a583-d3dc-433a-8b0f-92c9984f6b20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576347 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac2a583-d3dc-433a-8b0f-92c9984f6b20-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mwhvk\" (UID: \"0ac2a583-d3dc-433a-8b0f-92c9984f6b20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576379 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25dwx\" (UniqueName: \"kubernetes.io/projected/894a6339-d208-46db-8769-ac9153cb1ba0-kube-api-access-25dwx\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 
15:10:43.576429 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6dbb097-e288-4cf9-8aa5-f35c997358df-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576455 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef607c4-e16c-4ae6-9b66-3206b100267c-serving-cert\") pod \"openshift-config-operator-7777fb866f-vj475\" (UID: \"cef607c4-e16c-4ae6-9b66-3206b100267c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576504 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcb2\" (UniqueName: \"kubernetes.io/projected/24828dfa-ec12-4de9-aaba-96716e62d49a-kube-api-access-dfcb2\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576533 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-etcd-serving-ca\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576593 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6dbb097-e288-4cf9-8aa5-f35c997358df-serving-cert\") pod \"apiserver-7bbb656c7d-h46d6\" 
(UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576618 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5jh6\" (UniqueName: \"kubernetes.io/projected/a6dbb097-e288-4cf9-8aa5-f35c997358df-kube-api-access-l5jh6\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576680 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-node-pullsecrets\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576705 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-trusted-ca-bundle\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.577983 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-etcd-serving-ca\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.578648 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-client-ca\") pod 
\"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.579285 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-audit\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.579328 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-node-pullsecrets\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.579571 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-trusted-ca-bundle\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.576814 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93d54dd-4445-4bfd-a9fb-914d3b06e049-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rx9c4\" (UID: \"e93d54dd-4445-4bfd-a9fb-914d3b06e049\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.579772 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.579838 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-client-ca\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.579899 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.579934 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cbd9208d-08ed-47af-a7cf-b9ee3973b964-images\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.579959 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbd9208d-08ed-47af-a7cf-b9ee3973b964-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.579985 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580010 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwh64\" (UniqueName: \"kubernetes.io/projected/927094cf-5a33-4170-97f1-b9b2c4f5b519-kube-api-access-rwh64\") pod \"dns-operator-744455d44c-2p4pk\" (UID: \"927094cf-5a33-4170-97f1-b9b2c4f5b519\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580037 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-audit-dir\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580059 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh84z\" (UniqueName: \"kubernetes.io/projected/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-kube-api-access-hh84z\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580080 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580105 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab80f79-35d6-42cc-8480-e2a778d41da7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tggvq\" (UID: \"8ab80f79-35d6-42cc-8480-e2a778d41da7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580132 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp985\" (UniqueName: \"kubernetes.io/projected/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-kube-api-access-xp985\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580153 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6dbb097-e288-4cf9-8aa5-f35c997358df-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580174 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-service-ca-bundle\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580196 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdq8m\" (UniqueName: \"kubernetes.io/projected/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-kube-api-access-zdq8m\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580220 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qczn\" (UniqueName: \"kubernetes.io/projected/420813d8-71d5-401e-9af6-61296b8a25ba-kube-api-access-2qczn\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580240 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6dbb097-e288-4cf9-8aa5-f35c997358df-etcd-client\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580263 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9f06c8d-2fe6-44f7-8870-0142002dfae8-config\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580283 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580301 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580325 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d54dd-4445-4bfd-a9fb-914d3b06e049-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rx9c4\" (UID: \"e93d54dd-4445-4bfd-a9fb-914d3b06e049\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580348 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/927094cf-5a33-4170-97f1-b9b2c4f5b519-metrics-tls\") pod \"dns-operator-744455d44c-2p4pk\" (UID: \"927094cf-5a33-4170-97f1-b9b2c4f5b519\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580371 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-image-import-ca\") pod \"apiserver-76f77b778f-nmrvs\" (UID: 
\"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580395 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-serving-cert\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580417 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-policies\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580441 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-config\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580465 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9f06c8d-2fe6-44f7-8870-0142002dfae8-trusted-ca\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580488 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/24828dfa-ec12-4de9-aaba-96716e62d49a-serving-cert\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580514 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580537 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k96j\" (UniqueName: \"kubernetes.io/projected/d1d0b154-f221-4132-9d6f-a17173841b1f-kube-api-access-5k96j\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580755 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580820 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ng2lw"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.581629 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cbd9208d-08ed-47af-a7cf-b9ee3973b964-images\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.582005 4697 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-audit-dir\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.582442 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.582536 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.580512 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-service-ca\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.583516 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.583591 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-oauth-serving-cert\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.583633 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-dir\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584352 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-image-import-ca\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584358 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-oauth-serving-cert\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584429 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-config\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584553 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzs8x\" (UniqueName: \"kubernetes.io/projected/73d9ac28-74b0-4ead-b4e4-b270264feb05-kube-api-access-gzs8x\") pod \"downloads-7954f5f757-78k6r\" (UID: \"73d9ac28-74b0-4ead-b4e4-b270264feb05\") " pod="openshift-console/downloads-7954f5f757-78k6r" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584577 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584580 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584655 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-serving-cert\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584679 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420813d8-71d5-401e-9af6-61296b8a25ba-config\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584694 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6dbb097-e288-4cf9-8aa5-f35c997358df-audit-policies\") pod \"apiserver-7bbb656c7d-h46d6\" 
(UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584735 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvfx7\" (UniqueName: \"kubernetes.io/projected/cef607c4-e16c-4ae6-9b66-3206b100267c-kube-api-access-bvfx7\") pod \"openshift-config-operator-7777fb866f-vj475\" (UID: \"cef607c4-e16c-4ae6-9b66-3206b100267c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584753 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584768 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584854 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-serving-cert\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.584908 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585064 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585088 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-config\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585107 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flj75\" (UniqueName: \"kubernetes.io/projected/cbd9208d-08ed-47af-a7cf-b9ee3973b964-kube-api-access-flj75\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585145 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72bzh\" (UniqueName: \"kubernetes.io/projected/e93d54dd-4445-4bfd-a9fb-914d3b06e049-kube-api-access-72bzh\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-rx9c4\" (UID: \"e93d54dd-4445-4bfd-a9fb-914d3b06e049\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585162 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585191 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-config\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585226 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cef607c4-e16c-4ae6-9b66-3206b100267c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vj475\" (UID: \"cef607c4-e16c-4ae6-9b66-3206b100267c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585244 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd9208d-08ed-47af-a7cf-b9ee3973b964-config\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 
15:10:43.585258 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894a6339-d208-46db-8769-ac9153cb1ba0-serving-cert\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585273 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585288 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-encryption-config\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.585757 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-config\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.586730 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-config\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.587223 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd9208d-08ed-47af-a7cf-b9ee3973b964-config\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.587322 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-config\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.587376 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.587777 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-serving-cert\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.587869 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.587956 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/420813d8-71d5-401e-9af6-61296b8a25ba-auth-proxy-config\") pod 
\"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.588008 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rlcc\" (UniqueName: \"kubernetes.io/projected/c9f06c8d-2fe6-44f7-8870-0142002dfae8-kube-api-access-6rlcc\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.588038 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.589263 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbd9208d-08ed-47af-a7cf-b9ee3973b964-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.589302 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.590015 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-encryption-config\") 
pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.590054 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-46h6b"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.590168 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894a6339-d208-46db-8769-ac9153cb1ba0-serving-cert\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.591374 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.591534 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2p4pk"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.596699 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qlprf"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.596747 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.597305 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.597379 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.599501 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.601533 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-etcd-client\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.601888 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-78k6r"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.602175 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-serving-cert\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.602324 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-oauth-config\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.605845 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tgccn"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.606052 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5xn9m"] Jan 27 15:10:43 crc 
kubenswrapper[4697]: I0127 15:10:43.611447 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.619030 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.619079 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.619092 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.628122 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.633532 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.633578 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.633590 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fxbvr"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.634967 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.638821 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.638870 4697 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.638884 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qvsns"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.648222 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.650751 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.651717 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.652061 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-l8h5h"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.654365 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.654399 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k5tl7"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.659271 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45xm2"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.662434 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8b9kp"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.664577 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.666346 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.671991 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.677534 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8b9kp"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.681457 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-d4smc"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.682460 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d4smc"] Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.682548 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d4smc" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688465 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688504 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688523 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688538 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwh64\" (UniqueName: \"kubernetes.io/projected/927094cf-5a33-4170-97f1-b9b2c4f5b519-kube-api-access-rwh64\") pod \"dns-operator-744455d44c-2p4pk\" (UID: \"927094cf-5a33-4170-97f1-b9b2c4f5b519\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688555 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/8ab80f79-35d6-42cc-8480-e2a778d41da7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tggvq\" (UID: \"8ab80f79-35d6-42cc-8480-e2a778d41da7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688571 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6dbb097-e288-4cf9-8aa5-f35c997358df-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688587 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-service-ca-bundle\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688602 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdq8m\" (UniqueName: \"kubernetes.io/projected/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-kube-api-access-zdq8m\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688618 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp985\" (UniqueName: \"kubernetes.io/projected/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-kube-api-access-xp985\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688634 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688650 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d54dd-4445-4bfd-a9fb-914d3b06e049-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rx9c4\" (UID: \"e93d54dd-4445-4bfd-a9fb-914d3b06e049\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688665 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qczn\" (UniqueName: \"kubernetes.io/projected/420813d8-71d5-401e-9af6-61296b8a25ba-kube-api-access-2qczn\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688679 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6dbb097-e288-4cf9-8aa5-f35c997358df-etcd-client\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688696 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c9f06c8d-2fe6-44f7-8870-0142002dfae8-config\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688711 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688725 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/927094cf-5a33-4170-97f1-b9b2c4f5b519-metrics-tls\") pod \"dns-operator-744455d44c-2p4pk\" (UID: \"927094cf-5a33-4170-97f1-b9b2c4f5b519\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688739 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-policies\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688754 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9f06c8d-2fe6-44f7-8870-0142002dfae8-trusted-ca\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688771 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24828dfa-ec12-4de9-aaba-96716e62d49a-serving-cert\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688834 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k96j\" (UniqueName: \"kubernetes.io/projected/d1d0b154-f221-4132-9d6f-a17173841b1f-kube-api-access-5k96j\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688852 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688883 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-dir\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688899 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: 
\"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688916 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzs8x\" (UniqueName: \"kubernetes.io/projected/73d9ac28-74b0-4ead-b4e4-b270264feb05-kube-api-access-gzs8x\") pod \"downloads-7954f5f757-78k6r\" (UID: \"73d9ac28-74b0-4ead-b4e4-b270264feb05\") " pod="openshift-console/downloads-7954f5f757-78k6r" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688930 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-serving-cert\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688947 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688962 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420813d8-71d5-401e-9af6-61296b8a25ba-config\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688975 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a6dbb097-e288-4cf9-8aa5-f35c997358df-audit-policies\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.688994 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvfx7\" (UniqueName: \"kubernetes.io/projected/cef607c4-e16c-4ae6-9b66-3206b100267c-kube-api-access-bvfx7\") pod \"openshift-config-operator-7777fb866f-vj475\" (UID: \"cef607c4-e16c-4ae6-9b66-3206b100267c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689009 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689031 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689052 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72bzh\" (UniqueName: \"kubernetes.io/projected/e93d54dd-4445-4bfd-a9fb-914d3b06e049-kube-api-access-72bzh\") pod \"openshift-controller-manager-operator-756b6f6bc6-rx9c4\" (UID: \"e93d54dd-4445-4bfd-a9fb-914d3b06e049\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689067 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689084 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689101 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-config\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689118 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cef607c4-e16c-4ae6-9b66-3206b100267c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vj475\" (UID: \"cef607c4-e16c-4ae6-9b66-3206b100267c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689134 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/420813d8-71d5-401e-9af6-61296b8a25ba-auth-proxy-config\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689149 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rlcc\" (UniqueName: \"kubernetes.io/projected/c9f06c8d-2fe6-44f7-8870-0142002dfae8-kube-api-access-6rlcc\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689165 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689186 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/420813d8-71d5-401e-9af6-61296b8a25ba-machine-approver-tls\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689201 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6dbb097-e288-4cf9-8aa5-f35c997358df-audit-dir\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689217 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f06c8d-2fe6-44f7-8870-0142002dfae8-serving-cert\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689240 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6dbb097-e288-4cf9-8aa5-f35c997358df-encryption-config\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689257 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689272 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5fql\" (UniqueName: \"kubernetes.io/projected/0ac2a583-d3dc-433a-8b0f-92c9984f6b20-kube-api-access-f5fql\") pod \"openshift-apiserver-operator-796bbdcf4f-mwhvk\" (UID: \"0ac2a583-d3dc-433a-8b0f-92c9984f6b20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689287 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-config\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689305 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l5qn\" (UniqueName: \"kubernetes.io/projected/8ab80f79-35d6-42cc-8480-e2a778d41da7-kube-api-access-8l5qn\") pod \"cluster-samples-operator-665b6dd947-tggvq\" (UID: \"8ab80f79-35d6-42cc-8480-e2a778d41da7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689321 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac2a583-d3dc-433a-8b0f-92c9984f6b20-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mwhvk\" (UID: \"0ac2a583-d3dc-433a-8b0f-92c9984f6b20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689335 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac2a583-d3dc-433a-8b0f-92c9984f6b20-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mwhvk\" (UID: \"0ac2a583-d3dc-433a-8b0f-92c9984f6b20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689358 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6dbb097-e288-4cf9-8aa5-f35c997358df-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc 
kubenswrapper[4697]: I0127 15:10:43.689375 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef607c4-e16c-4ae6-9b66-3206b100267c-serving-cert\") pod \"openshift-config-operator-7777fb866f-vj475\" (UID: \"cef607c4-e16c-4ae6-9b66-3206b100267c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689390 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcb2\" (UniqueName: \"kubernetes.io/projected/24828dfa-ec12-4de9-aaba-96716e62d49a-kube-api-access-dfcb2\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689411 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6dbb097-e288-4cf9-8aa5-f35c997358df-serving-cert\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689425 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5jh6\" (UniqueName: \"kubernetes.io/projected/a6dbb097-e288-4cf9-8aa5-f35c997358df-kube-api-access-l5jh6\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689441 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93d54dd-4445-4bfd-a9fb-914d3b06e049-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rx9c4\" (UID: 
\"e93d54dd-4445-4bfd-a9fb-914d3b06e049\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689455 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689470 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-client-ca\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689718 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.690083 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-client-ca\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.691243 4697 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.691358 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab80f79-35d6-42cc-8480-e2a778d41da7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tggvq\" (UID: \"8ab80f79-35d6-42cc-8480-e2a778d41da7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.692187 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d54dd-4445-4bfd-a9fb-914d3b06e049-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rx9c4\" (UID: \"e93d54dd-4445-4bfd-a9fb-914d3b06e049\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.692444 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-dir\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.692817 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.693023 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.693224 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6dbb097-e288-4cf9-8aa5-f35c997358df-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.693331 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-service-ca-bundle\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.692884 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.693534 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.693737 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6dbb097-e288-4cf9-8aa5-f35c997358df-audit-policies\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.694267 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-config\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.694308 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420813d8-71d5-401e-9af6-61296b8a25ba-config\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.689196 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.694755 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cef607c4-e16c-4ae6-9b66-3206b100267c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vj475\" (UID: \"cef607c4-e16c-4ae6-9b66-3206b100267c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.695112 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9f06c8d-2fe6-44f7-8870-0142002dfae8-config\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.695231 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/420813d8-71d5-401e-9af6-61296b8a25ba-auth-proxy-config\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.695628 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.695732 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-policies\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.696287 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac2a583-d3dc-433a-8b0f-92c9984f6b20-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mwhvk\" (UID: \"0ac2a583-d3dc-433a-8b0f-92c9984f6b20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.696331 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6dbb097-e288-4cf9-8aa5-f35c997358df-audit-dir\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.696739 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6dbb097-e288-4cf9-8aa5-f35c997358df-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.696743 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.696885 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.697428 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6dbb097-e288-4cf9-8aa5-f35c997358df-serving-cert\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.697483 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93d54dd-4445-4bfd-a9fb-914d3b06e049-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rx9c4\" (UID: \"e93d54dd-4445-4bfd-a9fb-914d3b06e049\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.698428 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-config\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.699190 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-serving-cert\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.699555 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef607c4-e16c-4ae6-9b66-3206b100267c-serving-cert\") pod \"openshift-config-operator-7777fb866f-vj475\" (UID: \"cef607c4-e16c-4ae6-9b66-3206b100267c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.699683 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.699911 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9f06c8d-2fe6-44f7-8870-0142002dfae8-trusted-ca\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.700093 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/420813d8-71d5-401e-9af6-61296b8a25ba-machine-approver-tls\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.700879 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.701238 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.701476 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f06c8d-2fe6-44f7-8870-0142002dfae8-serving-cert\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.701596 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.701951 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6dbb097-e288-4cf9-8aa5-f35c997358df-etcd-client\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.702350 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac2a583-d3dc-433a-8b0f-92c9984f6b20-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mwhvk\" (UID: \"0ac2a583-d3dc-433a-8b0f-92c9984f6b20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.703926 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/927094cf-5a33-4170-97f1-b9b2c4f5b519-metrics-tls\") pod \"dns-operator-744455d44c-2p4pk\" (UID: \"927094cf-5a33-4170-97f1-b9b2c4f5b519\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.704049 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6dbb097-e288-4cf9-8aa5-f35c997358df-encryption-config\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.704332 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.705737 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24828dfa-ec12-4de9-aaba-96716e62d49a-serving-cert\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.711520 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.731162 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.751679 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.770546 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.790536 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.811268 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.831296 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.850975 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.871457 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.891400 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.911278 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.931693 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.951807 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.971486 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 15:10:43 crc kubenswrapper[4697]: I0127 15:10:43.991156 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.011477 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.032175 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.051262 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.072247 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.091383 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.112448 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.131624 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.152611 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.171853 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.192425 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.212240 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.232593 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.252746 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.272241 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.292635 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.312018 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.331509 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.351028 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.370924 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.391086 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.410442 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.431920 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.451747 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.470898 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.491714 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.511676 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.530325 4697 request.go:700] Waited for 1.015139751s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.532161 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.552448 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.571624 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.591858 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.611608 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.631091 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.652340 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.672392 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.702122 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.712557 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.732100 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.751930 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.772039 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.791940 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.811587 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.831505 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.851582 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.872468 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.891673 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.931651 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.951950 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.972355 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 27 15:10:44 crc kubenswrapper[4697]: I0127 15:10:44.992255 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.012014 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.042720 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.052216 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.072582 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.092937 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.111915 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.131509 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.170805 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgwpf\" (UniqueName: \"kubernetes.io/projected/f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362-kube-api-access-xgwpf\") pod \"apiserver-76f77b778f-nmrvs\" (UID: \"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362\") " pod="openshift-apiserver/apiserver-76f77b778f-nmrvs"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.193767 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25dwx\" (UniqueName: \"kubernetes.io/projected/894a6339-d208-46db-8769-ac9153cb1ba0-kube-api-access-25dwx\") pod \"controller-manager-879f6c89f-7ddjp\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.232127 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.233557 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh84z\" (UniqueName: \"kubernetes.io/projected/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-kube-api-access-hh84z\") pod \"console-f9d7485db-wjd95\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " pod="openshift-console/console-f9d7485db-wjd95"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.240818 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.251280 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.273278 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.309127 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flj75\" (UniqueName: \"kubernetes.io/projected/cbd9208d-08ed-47af-a7cf-b9ee3973b964-kube-api-access-flj75\") pod \"machine-api-operator-5694c8668f-9p5q5\" (UID: \"cbd9208d-08ed-47af-a7cf-b9ee3973b964\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.311170 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.331229 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.353407 4697 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.372692 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.391272 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.410573 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.431394 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.451944 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.452457 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.471245 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.483183 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.497998 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wjd95"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.505028 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwh64\" (UniqueName: \"kubernetes.io/projected/927094cf-5a33-4170-97f1-b9b2c4f5b519-kube-api-access-rwh64\") pod \"dns-operator-744455d44c-2p4pk\" (UID: \"927094cf-5a33-4170-97f1-b9b2c4f5b519\") " pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.532387 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.546320 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72bzh\" (UniqueName: \"kubernetes.io/projected/e93d54dd-4445-4bfd-a9fb-914d3b06e049-kube-api-access-72bzh\") pod \"openshift-controller-manager-operator-756b6f6bc6-rx9c4\" (UID: \"e93d54dd-4445-4bfd-a9fb-914d3b06e049\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.550246 4697 request.go:700] Waited for 1.858859265s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.568336 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdq8m\" (UniqueName: \"kubernetes.io/projected/4f8433f4-0c5f-40eb-b4c5-88c02b1595ad-kube-api-access-zdq8m\") pod \"authentication-operator-69f744f599-46h6b\" (UID: \"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.591957 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qczn\" (UniqueName: \"kubernetes.io/projected/420813d8-71d5-401e-9af6-61296b8a25ba-kube-api-access-2qczn\") pod \"machine-approver-56656f9798-z6snp\" (UID: \"420813d8-71d5-401e-9af6-61296b8a25ba\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.596556 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.606268 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzs8x\" (UniqueName: \"kubernetes.io/projected/73d9ac28-74b0-4ead-b4e4-b270264feb05-kube-api-access-gzs8x\") pod \"downloads-7954f5f757-78k6r\" (UID: \"73d9ac28-74b0-4ead-b4e4-b270264feb05\") " pod="openshift-console/downloads-7954f5f757-78k6r"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.629861 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k96j\" (UniqueName: \"kubernetes.io/projected/d1d0b154-f221-4132-9d6f-a17173841b1f-kube-api-access-5k96j\") pod \"oauth-openshift-558db77b4-tgccn\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.652570 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp985\" (UniqueName: \"kubernetes.io/projected/e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1-kube-api-access-xp985\") pod \"cluster-image-registry-operator-dc59b4c8b-zdp8l\" (UID: \"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.667682 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvfx7\" (UniqueName: \"kubernetes.io/projected/cef607c4-e16c-4ae6-9b66-3206b100267c-kube-api-access-bvfx7\") pod \"openshift-config-operator-7777fb866f-vj475\" (UID: \"cef607c4-e16c-4ae6-9b66-3206b100267c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.668422 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.677074 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-78k6r"
Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.689135 4697 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.690840 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcb2\" (UniqueName: \"kubernetes.io/projected/24828dfa-ec12-4de9-aaba-96716e62d49a-kube-api-access-dfcb2\") pod \"route-controller-manager-6576b87f9c-w85nj\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.709213 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5jh6\" (UniqueName: \"kubernetes.io/projected/a6dbb097-e288-4cf9-8aa5-f35c997358df-kube-api-access-l5jh6\") pod \"apiserver-7bbb656c7d-h46d6\" (UID: \"a6dbb097-e288-4cf9-8aa5-f35c997358df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.725649 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rlcc\" (UniqueName: \"kubernetes.io/projected/c9f06c8d-2fe6-44f7-8870-0142002dfae8-kube-api-access-6rlcc\") pod \"console-operator-58897d9998-v52qb\" (UID: \"c9f06c8d-2fe6-44f7-8870-0142002dfae8\") " pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.737677 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7ddjp"] Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.743770 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nmrvs"] Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.743836 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wjd95"] Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.745018 4697 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9p5q5"] Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.748071 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.758069 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.759768 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5fql\" (UniqueName: \"kubernetes.io/projected/0ac2a583-d3dc-433a-8b0f-92c9984f6b20-kube-api-access-f5fql\") pod \"openshift-apiserver-operator-796bbdcf4f-mwhvk\" (UID: \"0ac2a583-d3dc-433a-8b0f-92c9984f6b20\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.762763 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.765064 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l5qn\" (UniqueName: \"kubernetes.io/projected/8ab80f79-35d6-42cc-8480-e2a778d41da7-kube-api-access-8l5qn\") pod \"cluster-samples-operator-665b6dd947-tggvq\" (UID: \"8ab80f79-35d6-42cc-8480-e2a778d41da7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" Jan 27 15:10:45 crc kubenswrapper[4697]: W0127 15:10:45.780469 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f95124d_8a5d_4a0d_b4cd_906d0341a6a2.slice/crio-29d5c9787502e9669abbe96eaf4ab7794727491f9681e13bbc4697ee3d0371ce WatchSource:0}: Error finding container 29d5c9787502e9669abbe96eaf4ab7794727491f9681e13bbc4697ee3d0371ce: Status 404 returned error can't find the container with id 29d5c9787502e9669abbe96eaf4ab7794727491f9681e13bbc4697ee3d0371ce Jan 27 15:10:45 crc kubenswrapper[4697]: W0127 15:10:45.781467 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9c4aaa3_b53b_4b3f_8d6a_b9b7eef37362.slice/crio-3f541973973951c10c5ad73ee11a60fcd2c7b1a343ad4ce5c52f74f80e817225 WatchSource:0}: Error finding container 3f541973973951c10c5ad73ee11a60fcd2c7b1a343ad4ce5c52f74f80e817225: Status 404 returned error can't find the container with id 3f541973973951c10c5ad73ee11a60fcd2c7b1a343ad4ce5c52f74f80e817225 Jan 27 15:10:45 crc kubenswrapper[4697]: W0127 15:10:45.782167 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd9208d_08ed_47af_a7cf_b9ee3973b964.slice/crio-abc06315e62eec4230e597a3282470ba1b3e378e3825b14d4dbe7bcd2a5a5c31 WatchSource:0}: Error finding container 
abc06315e62eec4230e597a3282470ba1b3e378e3825b14d4dbe7bcd2a5a5c31: Status 404 returned error can't find the container with id abc06315e62eec4230e597a3282470ba1b3e378e3825b14d4dbe7bcd2a5a5c31 Jan 27 15:10:45 crc kubenswrapper[4697]: W0127 15:10:45.784369 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod894a6339_d208_46db_8769_ac9153cb1ba0.slice/crio-885ac78ad4178301d09e55851c0a8ba33a024a355890367aa811516fdf404619 WatchSource:0}: Error finding container 885ac78ad4178301d09e55851c0a8ba33a024a355890367aa811516fdf404619: Status 404 returned error can't find the container with id 885ac78ad4178301d09e55851c0a8ba33a024a355890367aa811516fdf404619 Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839312 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17bf464-23fd-475b-b25a-33e14cd9ced0-config-volume\") pod \"dns-default-5xn9m\" (UID: \"a17bf464-23fd-475b-b25a-33e14cd9ced0\") " pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839345 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24381e71-6f3a-4c4c-b60d-a10c06aa12f7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sztfd\" (UID: \"24381e71-6f3a-4c4c-b60d-a10c06aa12f7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839394 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50568641-b008-4747-bdbc-f474cc35bf58-serving-cert\") pod \"service-ca-operator-777779d784-qvsns\" (UID: \"50568641-b008-4747-bdbc-f474cc35bf58\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839426 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d50c0395-ec10-4463-92e4-29defdd47f62-service-ca-bundle\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839461 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9189511e-af92-4fee-be70-e2baab592c98-config\") pod \"kube-controller-manager-operator-78b949d7b-6vnzn\" (UID: \"9189511e-af92-4fee-be70-e2baab592c98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839480 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd09f8c7-aace-4ca6-ac8c-aa4425391032-webhook-cert\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839560 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/991f5f59-9d29-4a19-974a-2b358b8b38a0-etcd-client\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839577 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-images\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839610 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0e1afff-d250-45de-bc85-3a20f622a52d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r5rsq\" (UID: \"f0e1afff-d250-45de-bc85-3a20f622a52d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839626 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k5gm\" (UniqueName: \"kubernetes.io/projected/a1556a89-d5d8-4eca-bf26-6475efb42496-kube-api-access-5k5gm\") pod \"catalog-operator-68c6474976-dr2qr\" (UID: \"a1556a89-d5d8-4eca-bf26-6475efb42496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839684 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/991f5f59-9d29-4a19-974a-2b358b8b38a0-etcd-service-ca\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839756 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpztt\" (UniqueName: \"kubernetes.io/projected/ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24-kube-api-access-cpztt\") pod \"olm-operator-6b444d44fb-qkkgm\" (UID: \"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839775 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/991f5f59-9d29-4a19-974a-2b358b8b38a0-serving-cert\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.839817 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-bound-sa-token\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840144 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmtq\" (UniqueName: \"kubernetes.io/projected/49dd977b-6315-4446-8804-242e7e94a375-kube-api-access-rgmtq\") pod \"control-plane-machine-set-operator-78cbb6b69f-4tnq9\" (UID: \"49dd977b-6315-4446-8804-242e7e94a375\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840425 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d50c0395-ec10-4463-92e4-29defdd47f62-metrics-certs\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840494 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43fd9fa4-b232-4d49-8f52-27d016de4cad-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840511 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-certificates\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840532 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc64ac2f-83b2-421c-b9e7-194a679b8653-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxcz8\" (UID: \"cc64ac2f-83b2-421c-b9e7-194a679b8653\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840550 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43fd9fa4-b232-4d49-8f52-27d016de4cad-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840630 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d50c0395-ec10-4463-92e4-29defdd47f62-stats-auth\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " 
pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840684 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49dd977b-6315-4446-8804-242e7e94a375-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4tnq9\" (UID: \"49dd977b-6315-4446-8804-242e7e94a375\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840705 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b777a7db-a7b4-4fdd-b30c-fb2243b658a0-signing-key\") pod \"service-ca-9c57cc56f-k5tl7\" (UID: \"b777a7db-a7b4-4fdd-b30c-fb2243b658a0\") " pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840719 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24381e71-6f3a-4c4c-b60d-a10c06aa12f7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sztfd\" (UID: \"24381e71-6f3a-4c4c-b60d-a10c06aa12f7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840822 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840851 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50568641-b008-4747-bdbc-f474cc35bf58-config\") pod \"service-ca-operator-777779d784-qvsns\" (UID: \"50568641-b008-4747-bdbc-f474cc35bf58\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840893 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7sx5\" (UniqueName: \"kubernetes.io/projected/a1a8ed06-fb86-479b-a5a1-0dbac195717a-kube-api-access-h7sx5\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840921 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgksq\" (UniqueName: \"kubernetes.io/projected/8041fd88-aa5f-41ea-acab-b694e78d4355-kube-api-access-mgksq\") pod \"package-server-manager-789f6589d5-wp9j5\" (UID: \"8041fd88-aa5f-41ea-acab-b694e78d4355\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.840941 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td428\" (UniqueName: \"kubernetes.io/projected/25c9b0f1-c6a8-4521-a480-9f46238e3a22-kube-api-access-td428\") pod \"multus-admission-controller-857f4d67dd-fxbvr\" (UID: \"25c9b0f1-c6a8-4521-a480-9f46238e3a22\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.841579 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24-srv-cert\") pod \"olm-operator-6b444d44fb-qkkgm\" (UID: \"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.841674 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfr9l\" (UniqueName: \"kubernetes.io/projected/d50c0395-ec10-4463-92e4-29defdd47f62-kube-api-access-qfr9l\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.841698 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1556a89-d5d8-4eca-bf26-6475efb42496-profile-collector-cert\") pod \"catalog-operator-68c6474976-dr2qr\" (UID: \"a1556a89-d5d8-4eca-bf26-6475efb42496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.842016 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6l2\" (UniqueName: \"kubernetes.io/projected/dd09f8c7-aace-4ca6-ac8c-aa4425391032-kube-api-access-tc6l2\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.842045 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25c9b0f1-c6a8-4521-a480-9f46238e3a22-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fxbvr\" (UID: \"25c9b0f1-c6a8-4521-a480-9f46238e3a22\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" Jan 27 15:10:45 crc kubenswrapper[4697]: E0127 15:10:45.842104 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:46.342089129 +0000 UTC m=+142.514489030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.842131 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17342d35-ebda-45fa-8b7a-9be1064954a9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-q7j4f\" (UID: \"17342d35-ebda-45fa-8b7a-9be1064954a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.842283 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qkkgm\" (UID: \"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.842317 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/d50c0395-ec10-4463-92e4-29defdd47f62-default-certificate\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.842345 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkkvj\" (UniqueName: \"kubernetes.io/projected/b777a7db-a7b4-4fdd-b30c-fb2243b658a0-kube-api-access-gkkvj\") pod \"service-ca-9c57cc56f-k5tl7\" (UID: \"b777a7db-a7b4-4fdd-b30c-fb2243b658a0\") " pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.842445 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/991f5f59-9d29-4a19-974a-2b358b8b38a0-etcd-ca\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.842645 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfzvj\" (UniqueName: \"kubernetes.io/projected/06d5e1be-2c28-4e27-ba7e-e24b4e72401b-kube-api-access-qfzvj\") pod \"migrator-59844c95c7-9ds6c\" (UID: \"06d5e1be-2c28-4e27-ba7e-e24b4e72401b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.842852 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgkk\" (UniqueName: \"kubernetes.io/projected/50568641-b008-4747-bdbc-f474cc35bf58-kube-api-access-bbgkk\") pod \"service-ca-operator-777779d784-qvsns\" (UID: \"50568641-b008-4747-bdbc-f474cc35bf58\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:45 crc 
kubenswrapper[4697]: I0127 15:10:45.843038 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a8ed06-fb86-479b-a5a1-0dbac195717a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.843111 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-tls\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.846907 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnjp\" (UniqueName: \"kubernetes.io/projected/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-kube-api-access-wtnjp\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.846952 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8041fd88-aa5f-41ea-acab-b694e78d4355-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wp9j5\" (UID: \"8041fd88-aa5f-41ea-acab-b694e78d4355\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847023 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-trusted-ca\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847068 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-45xm2\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847093 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1a8ed06-fb86-479b-a5a1-0dbac195717a-metrics-tls\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847115 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b777a7db-a7b4-4fdd-b30c-fb2243b658a0-signing-cabundle\") pod \"service-ca-9c57cc56f-k5tl7\" (UID: \"b777a7db-a7b4-4fdd-b30c-fb2243b658a0\") " pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847142 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmssv\" (UniqueName: \"kubernetes.io/projected/a17bf464-23fd-475b-b25a-33e14cd9ced0-kube-api-access-zmssv\") pod \"dns-default-5xn9m\" (UID: \"a17bf464-23fd-475b-b25a-33e14cd9ced0\") " pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847156 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a8ed06-fb86-479b-a5a1-0dbac195717a-trusted-ca\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847171 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9189511e-af92-4fee-be70-e2baab592c98-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6vnzn\" (UID: \"9189511e-af92-4fee-be70-e2baab592c98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847189 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd09f8c7-aace-4ca6-ac8c-aa4425391032-tmpfs\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847234 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh57w\" (UniqueName: \"kubernetes.io/projected/f0e1afff-d250-45de-bc85-3a20f622a52d-kube-api-access-mh57w\") pod \"machine-config-controller-84d6567774-r5rsq\" (UID: \"f0e1afff-d250-45de-bc85-3a20f622a52d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847299 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdxh\" (UniqueName: 
\"kubernetes.io/projected/baa7401d-bcad-4175-af1b-46414c003f9e-kube-api-access-kxdxh\") pod \"marketplace-operator-79b997595-45xm2\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847329 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msng2\" (UniqueName: \"kubernetes.io/projected/24381e71-6f3a-4c4c-b60d-a10c06aa12f7-kube-api-access-msng2\") pod \"kube-storage-version-migrator-operator-b67b599dd-sztfd\" (UID: \"24381e71-6f3a-4c4c-b60d-a10c06aa12f7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847346 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc64ac2f-83b2-421c-b9e7-194a679b8653-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxcz8\" (UID: \"cc64ac2f-83b2-421c-b9e7-194a679b8653\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847401 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs6nr\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-kube-api-access-hs6nr\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847427 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847449 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0e1afff-d250-45de-bc85-3a20f622a52d-proxy-tls\") pod \"machine-config-controller-84d6567774-r5rsq\" (UID: \"f0e1afff-d250-45de-bc85-3a20f622a52d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847481 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc64ac2f-83b2-421c-b9e7-194a679b8653-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxcz8\" (UID: \"cc64ac2f-83b2-421c-b9e7-194a679b8653\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847505 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1556a89-d5d8-4eca-bf26-6475efb42496-srv-cert\") pod \"catalog-operator-68c6474976-dr2qr\" (UID: \"a1556a89-d5d8-4eca-bf26-6475efb42496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847531 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtwh9\" (UniqueName: \"kubernetes.io/projected/991f5f59-9d29-4a19-974a-2b358b8b38a0-kube-api-access-wtwh9\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc 
kubenswrapper[4697]: I0127 15:10:45.847550 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9189511e-af92-4fee-be70-e2baab592c98-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6vnzn\" (UID: \"9189511e-af92-4fee-be70-e2baab592c98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847569 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-45xm2\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847721 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17342d35-ebda-45fa-8b7a-9be1064954a9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-q7j4f\" (UID: \"17342d35-ebda-45fa-8b7a-9be1064954a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847767 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd09f8c7-aace-4ca6-ac8c-aa4425391032-apiservice-cert\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847814 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-proxy-tls\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.847843 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a17bf464-23fd-475b-b25a-33e14cd9ced0-metrics-tls\") pod \"dns-default-5xn9m\" (UID: \"a17bf464-23fd-475b-b25a-33e14cd9ced0\") " pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.848128 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17342d35-ebda-45fa-8b7a-9be1064954a9-config\") pod \"kube-apiserver-operator-766d6c64bb-q7j4f\" (UID: \"17342d35-ebda-45fa-8b7a-9be1064954a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.848203 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/991f5f59-9d29-4a19-974a-2b358b8b38a0-config\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.885240 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.914433 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.926902 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.941213 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.950376 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.950815 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfzvj\" (UniqueName: \"kubernetes.io/projected/06d5e1be-2c28-4e27-ba7e-e24b4e72401b-kube-api-access-qfzvj\") pod \"migrator-59844c95c7-9ds6c\" (UID: \"06d5e1be-2c28-4e27-ba7e-e24b4e72401b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.950858 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c88b976d-68fb-4ca7-8a36-b6d0c1022346-node-bootstrap-token\") pod \"machine-config-server-ng2lw\" (UID: \"c88b976d-68fb-4ca7-8a36-b6d0c1022346\") " pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.950892 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbgkk\" (UniqueName: 
\"kubernetes.io/projected/50568641-b008-4747-bdbc-f474cc35bf58-kube-api-access-bbgkk\") pod \"service-ca-operator-777779d784-qvsns\" (UID: \"50568641-b008-4747-bdbc-f474cc35bf58\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.950918 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdrnm\" (UniqueName: \"kubernetes.io/projected/c88b976d-68fb-4ca7-8a36-b6d0c1022346-kube-api-access-cdrnm\") pod \"machine-config-server-ng2lw\" (UID: \"c88b976d-68fb-4ca7-8a36-b6d0c1022346\") " pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.950941 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-tls\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.950966 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a8ed06-fb86-479b-a5a1-0dbac195717a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.950993 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6128d2c8-dc50-4e71-8093-c931f60b8495-cert\") pod \"ingress-canary-d4smc\" (UID: \"6128d2c8-dc50-4e71-8093-c931f60b8495\") " pod="openshift-ingress-canary/ingress-canary-d4smc" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951020 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6rzr\" (UniqueName: \"kubernetes.io/projected/9957edac-db7a-4d39-8224-f3e24a16bb43-kube-api-access-q6rzr\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951085 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-socket-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951111 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-registration-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951136 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-mountpoint-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951164 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8041fd88-aa5f-41ea-acab-b694e78d4355-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wp9j5\" (UID: \"8041fd88-aa5f-41ea-acab-b694e78d4355\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951187 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-trusted-ca\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951242 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnjp\" (UniqueName: \"kubernetes.io/projected/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-kube-api-access-wtnjp\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951269 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-45xm2\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951297 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1a8ed06-fb86-479b-a5a1-0dbac195717a-metrics-tls\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951322 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/b777a7db-a7b4-4fdd-b30c-fb2243b658a0-signing-cabundle\") pod \"service-ca-9c57cc56f-k5tl7\" (UID: \"b777a7db-a7b4-4fdd-b30c-fb2243b658a0\") " pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951347 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmssv\" (UniqueName: \"kubernetes.io/projected/a17bf464-23fd-475b-b25a-33e14cd9ced0-kube-api-access-zmssv\") pod \"dns-default-5xn9m\" (UID: \"a17bf464-23fd-475b-b25a-33e14cd9ced0\") " pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951371 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9189511e-af92-4fee-be70-e2baab592c98-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6vnzn\" (UID: \"9189511e-af92-4fee-be70-e2baab592c98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951395 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd09f8c7-aace-4ca6-ac8c-aa4425391032-tmpfs\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.951420 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a8ed06-fb86-479b-a5a1-0dbac195717a-trusted-ca\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: E0127 15:10:45.951664 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:46.451646068 +0000 UTC m=+142.624045849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.954852 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd09f8c7-aace-4ca6-ac8c-aa4425391032-tmpfs\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.955074 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b777a7db-a7b4-4fdd-b30c-fb2243b658a0-signing-cabundle\") pod \"service-ca-9c57cc56f-k5tl7\" (UID: \"b777a7db-a7b4-4fdd-b30c-fb2243b658a0\") " pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.956519 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-trusted-ca\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.957635 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-45xm2\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.957652 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.958350 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-tls\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.958556 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a8ed06-fb86-479b-a5a1-0dbac195717a-trusted-ca\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959498 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1a8ed06-fb86-479b-a5a1-0dbac195717a-metrics-tls\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959611 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh57w\" (UniqueName: 
\"kubernetes.io/projected/f0e1afff-d250-45de-bc85-3a20f622a52d-kube-api-access-mh57w\") pod \"machine-config-controller-84d6567774-r5rsq\" (UID: \"f0e1afff-d250-45de-bc85-3a20f622a52d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959664 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msng2\" (UniqueName: \"kubernetes.io/projected/24381e71-6f3a-4c4c-b60d-a10c06aa12f7-kube-api-access-msng2\") pod \"kube-storage-version-migrator-operator-b67b599dd-sztfd\" (UID: \"24381e71-6f3a-4c4c-b60d-a10c06aa12f7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959695 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc64ac2f-83b2-421c-b9e7-194a679b8653-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxcz8\" (UID: \"cc64ac2f-83b2-421c-b9e7-194a679b8653\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959725 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxdxh\" (UniqueName: \"kubernetes.io/projected/baa7401d-bcad-4175-af1b-46414c003f9e-kube-api-access-kxdxh\") pod \"marketplace-operator-79b997595-45xm2\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959756 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs6nr\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-kube-api-access-hs6nr\") pod \"image-registry-697d97f7c8-qlprf\" (UID: 
\"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959818 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959850 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0e1afff-d250-45de-bc85-3a20f622a52d-proxy-tls\") pod \"machine-config-controller-84d6567774-r5rsq\" (UID: \"f0e1afff-d250-45de-bc85-3a20f622a52d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959882 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc64ac2f-83b2-421c-b9e7-194a679b8653-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxcz8\" (UID: \"cc64ac2f-83b2-421c-b9e7-194a679b8653\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959919 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1556a89-d5d8-4eca-bf26-6475efb42496-srv-cert\") pod \"catalog-operator-68c6474976-dr2qr\" (UID: \"a1556a89-d5d8-4eca-bf26-6475efb42496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959951 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-csi-data-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.959981 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c88b976d-68fb-4ca7-8a36-b6d0c1022346-certs\") pod \"machine-config-server-ng2lw\" (UID: \"c88b976d-68fb-4ca7-8a36-b6d0c1022346\") " pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960011 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9189511e-af92-4fee-be70-e2baab592c98-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6vnzn\" (UID: \"9189511e-af92-4fee-be70-e2baab592c98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960041 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-45xm2\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960074 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtwh9\" (UniqueName: \"kubernetes.io/projected/991f5f59-9d29-4a19-974a-2b358b8b38a0-kube-api-access-wtwh9\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 
15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960100 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd09f8c7-aace-4ca6-ac8c-aa4425391032-apiservice-cert\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960172 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17342d35-ebda-45fa-8b7a-9be1064954a9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-q7j4f\" (UID: \"17342d35-ebda-45fa-8b7a-9be1064954a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960199 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-proxy-tls\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960219 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a17bf464-23fd-475b-b25a-33e14cd9ced0-metrics-tls\") pod \"dns-default-5xn9m\" (UID: \"a17bf464-23fd-475b-b25a-33e14cd9ced0\") " pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960249 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17342d35-ebda-45fa-8b7a-9be1064954a9-config\") pod \"kube-apiserver-operator-766d6c64bb-q7j4f\" (UID: \"17342d35-ebda-45fa-8b7a-9be1064954a9\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960279 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/991f5f59-9d29-4a19-974a-2b358b8b38a0-config\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960332 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17bf464-23fd-475b-b25a-33e14cd9ced0-config-volume\") pod \"dns-default-5xn9m\" (UID: \"a17bf464-23fd-475b-b25a-33e14cd9ced0\") " pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960424 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24381e71-6f3a-4c4c-b60d-a10c06aa12f7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sztfd\" (UID: \"24381e71-6f3a-4c4c-b60d-a10c06aa12f7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960466 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d50c0395-ec10-4463-92e4-29defdd47f62-service-ca-bundle\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960495 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50568641-b008-4747-bdbc-f474cc35bf58-serving-cert\") pod 
\"service-ca-operator-777779d784-qvsns\" (UID: \"50568641-b008-4747-bdbc-f474cc35bf58\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960515 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd09f8c7-aace-4ca6-ac8c-aa4425391032-webhook-cert\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960543 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9189511e-af92-4fee-be70-e2baab592c98-config\") pod \"kube-controller-manager-operator-78b949d7b-6vnzn\" (UID: \"9189511e-af92-4fee-be70-e2baab592c98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960605 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-plugins-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960637 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-images\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960832 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0e1afff-d250-45de-bc85-3a20f622a52d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r5rsq\" (UID: \"f0e1afff-d250-45de-bc85-3a20f622a52d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960870 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k5gm\" (UniqueName: \"kubernetes.io/projected/a1556a89-d5d8-4eca-bf26-6475efb42496-kube-api-access-5k5gm\") pod \"catalog-operator-68c6474976-dr2qr\" (UID: \"a1556a89-d5d8-4eca-bf26-6475efb42496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960900 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/991f5f59-9d29-4a19-974a-2b358b8b38a0-etcd-service-ca\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960923 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/991f5f59-9d29-4a19-974a-2b358b8b38a0-etcd-client\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960965 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz6g9\" (UniqueName: \"kubernetes.io/projected/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-kube-api-access-kz6g9\") pod \"collect-profiles-29492100-sfd69\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960993 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpztt\" (UniqueName: \"kubernetes.io/projected/ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24-kube-api-access-cpztt\") pod \"olm-operator-6b444d44fb-qkkgm\" (UID: \"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961017 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4tgn\" (UniqueName: \"kubernetes.io/projected/6128d2c8-dc50-4e71-8093-c931f60b8495-kube-api-access-z4tgn\") pod \"ingress-canary-d4smc\" (UID: \"6128d2c8-dc50-4e71-8093-c931f60b8495\") " pod="openshift-ingress-canary/ingress-canary-d4smc" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961045 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-bound-sa-token\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961072 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/991f5f59-9d29-4a19-974a-2b358b8b38a0-serving-cert\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961102 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d50c0395-ec10-4463-92e4-29defdd47f62-metrics-certs\") pod 
\"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961651 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgmtq\" (UniqueName: \"kubernetes.io/projected/49dd977b-6315-4446-8804-242e7e94a375-kube-api-access-rgmtq\") pod \"control-plane-machine-set-operator-78cbb6b69f-4tnq9\" (UID: \"49dd977b-6315-4446-8804-242e7e94a375\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961694 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43fd9fa4-b232-4d49-8f52-27d016de4cad-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961725 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-certificates\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961824 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc64ac2f-83b2-421c-b9e7-194a679b8653-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxcz8\" (UID: \"cc64ac2f-83b2-421c-b9e7-194a679b8653\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961858 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d50c0395-ec10-4463-92e4-29defdd47f62-stats-auth\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961885 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43fd9fa4-b232-4d49-8f52-27d016de4cad-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961915 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49dd977b-6315-4446-8804-242e7e94a375-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4tnq9\" (UID: \"49dd977b-6315-4446-8804-242e7e94a375\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961962 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b777a7db-a7b4-4fdd-b30c-fb2243b658a0-signing-key\") pod \"service-ca-9c57cc56f-k5tl7\" (UID: \"b777a7db-a7b4-4fdd-b30c-fb2243b658a0\") " pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.961991 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24381e71-6f3a-4c4c-b60d-a10c06aa12f7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sztfd\" (UID: \"24381e71-6f3a-4c4c-b60d-a10c06aa12f7\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962062 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962117 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50568641-b008-4747-bdbc-f474cc35bf58-config\") pod \"service-ca-operator-777779d784-qvsns\" (UID: \"50568641-b008-4747-bdbc-f474cc35bf58\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962175 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-config-volume\") pod \"collect-profiles-29492100-sfd69\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962209 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7sx5\" (UniqueName: \"kubernetes.io/projected/a1a8ed06-fb86-479b-a5a1-0dbac195717a-kube-api-access-h7sx5\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962240 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mgksq\" (UniqueName: \"kubernetes.io/projected/8041fd88-aa5f-41ea-acab-b694e78d4355-kube-api-access-mgksq\") pod \"package-server-manager-789f6589d5-wp9j5\" (UID: \"8041fd88-aa5f-41ea-acab-b694e78d4355\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962267 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td428\" (UniqueName: \"kubernetes.io/projected/25c9b0f1-c6a8-4521-a480-9f46238e3a22-kube-api-access-td428\") pod \"multus-admission-controller-857f4d67dd-fxbvr\" (UID: \"25c9b0f1-c6a8-4521-a480-9f46238e3a22\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962368 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-secret-volume\") pod \"collect-profiles-29492100-sfd69\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962422 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24-srv-cert\") pod \"olm-operator-6b444d44fb-qkkgm\" (UID: \"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962463 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfr9l\" (UniqueName: \"kubernetes.io/projected/d50c0395-ec10-4463-92e4-29defdd47f62-kube-api-access-qfr9l\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " 
pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962492 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1556a89-d5d8-4eca-bf26-6475efb42496-profile-collector-cert\") pod \"catalog-operator-68c6474976-dr2qr\" (UID: \"a1556a89-d5d8-4eca-bf26-6475efb42496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.962685 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc6l2\" (UniqueName: \"kubernetes.io/projected/dd09f8c7-aace-4ca6-ac8c-aa4425391032-kube-api-access-tc6l2\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.964302 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25c9b0f1-c6a8-4521-a480-9f46238e3a22-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fxbvr\" (UID: \"25c9b0f1-c6a8-4521-a480-9f46238e3a22\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.964353 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17342d35-ebda-45fa-8b7a-9be1064954a9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-q7j4f\" (UID: \"17342d35-ebda-45fa-8b7a-9be1064954a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.964391 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qkkgm\" (UID: \"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.964416 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d50c0395-ec10-4463-92e4-29defdd47f62-default-certificate\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.964465 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkkvj\" (UniqueName: \"kubernetes.io/projected/b777a7db-a7b4-4fdd-b30c-fb2243b658a0-kube-api-access-gkkvj\") pod \"service-ca-9c57cc56f-k5tl7\" (UID: \"b777a7db-a7b4-4fdd-b30c-fb2243b658a0\") " pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.964498 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/991f5f59-9d29-4a19-974a-2b358b8b38a0-etcd-ca\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.965333 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/991f5f59-9d29-4a19-974a-2b358b8b38a0-etcd-ca\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.967451 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-proxy-tls\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.971891 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-45xm2\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.973104 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0e1afff-d250-45de-bc85-3a20f622a52d-proxy-tls\") pod \"machine-config-controller-84d6567774-r5rsq\" (UID: \"f0e1afff-d250-45de-bc85-3a20f622a52d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.975288 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8041fd88-aa5f-41ea-acab-b694e78d4355-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wp9j5\" (UID: \"8041fd88-aa5f-41ea-acab-b694e78d4355\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.960100 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9189511e-af92-4fee-be70-e2baab592c98-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6vnzn\" (UID: \"9189511e-af92-4fee-be70-e2baab592c98\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.976554 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1556a89-d5d8-4eca-bf26-6475efb42496-srv-cert\") pod \"catalog-operator-68c6474976-dr2qr\" (UID: \"a1556a89-d5d8-4eca-bf26-6475efb42496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.977944 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17342d35-ebda-45fa-8b7a-9be1064954a9-config\") pod \"kube-apiserver-operator-766d6c64bb-q7j4f\" (UID: \"17342d35-ebda-45fa-8b7a-9be1064954a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:45 crc kubenswrapper[4697]: E0127 15:10:45.978385 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:46.478359641 +0000 UTC m=+142.650759432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.979452 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50568641-b008-4747-bdbc-f474cc35bf58-config\") pod \"service-ca-operator-777779d784-qvsns\" (UID: \"50568641-b008-4747-bdbc-f474cc35bf58\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.979494 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-images\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.980912 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0e1afff-d250-45de-bc85-3a20f622a52d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r5rsq\" (UID: \"f0e1afff-d250-45de-bc85-3a20f622a52d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.982588 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24-srv-cert\") pod \"olm-operator-6b444d44fb-qkkgm\" (UID: 
\"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.984343 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b777a7db-a7b4-4fdd-b30c-fb2243b658a0-signing-key\") pod \"service-ca-9c57cc56f-k5tl7\" (UID: \"b777a7db-a7b4-4fdd-b30c-fb2243b658a0\") " pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.984662 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/991f5f59-9d29-4a19-974a-2b358b8b38a0-etcd-service-ca\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.985339 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/991f5f59-9d29-4a19-974a-2b358b8b38a0-config\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.985464 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25c9b0f1-c6a8-4521-a480-9f46238e3a22-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fxbvr\" (UID: \"25c9b0f1-c6a8-4521-a480-9f46238e3a22\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.986058 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24381e71-6f3a-4c4c-b60d-a10c06aa12f7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sztfd\" 
(UID: \"24381e71-6f3a-4c4c-b60d-a10c06aa12f7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.986154 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17bf464-23fd-475b-b25a-33e14cd9ced0-config-volume\") pod \"dns-default-5xn9m\" (UID: \"a17bf464-23fd-475b-b25a-33e14cd9ced0\") " pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.988139 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49dd977b-6315-4446-8804-242e7e94a375-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4tnq9\" (UID: \"49dd977b-6315-4446-8804-242e7e94a375\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.989939 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh57w\" (UniqueName: \"kubernetes.io/projected/f0e1afff-d250-45de-bc85-3a20f622a52d-kube-api-access-mh57w\") pod \"machine-config-controller-84d6567774-r5rsq\" (UID: \"f0e1afff-d250-45de-bc85-3a20f622a52d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.988822 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd09f8c7-aace-4ca6-ac8c-aa4425391032-apiservice-cert\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.995291 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43fd9fa4-b232-4d49-8f52-27d016de4cad-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:45 crc kubenswrapper[4697]: I0127 15:10:45.997414 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-certificates\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.000001 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tgccn"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.001399 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd09f8c7-aace-4ca6-ac8c-aa4425391032-webhook-cert\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.003265 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/991f5f59-9d29-4a19-974a-2b358b8b38a0-etcd-client\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.003692 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/991f5f59-9d29-4a19-974a-2b358b8b38a0-serving-cert\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.004211 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17342d35-ebda-45fa-8b7a-9be1064954a9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-q7j4f\" (UID: \"17342d35-ebda-45fa-8b7a-9be1064954a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.005520 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50568641-b008-4747-bdbc-f474cc35bf58-serving-cert\") pod \"service-ca-operator-777779d784-qvsns\" (UID: \"50568641-b008-4747-bdbc-f474cc35bf58\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.009275 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfzvj\" (UniqueName: \"kubernetes.io/projected/06d5e1be-2c28-4e27-ba7e-e24b4e72401b-kube-api-access-qfzvj\") pod \"migrator-59844c95c7-9ds6c\" (UID: \"06d5e1be-2c28-4e27-ba7e-e24b4e72401b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.024707 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.025090 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/a1556a89-d5d8-4eca-bf26-6475efb42496-profile-collector-cert\") pod \"catalog-operator-68c6474976-dr2qr\" (UID: \"a1556a89-d5d8-4eca-bf26-6475efb42496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.025113 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qkkgm\" (UID: \"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.025596 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9189511e-af92-4fee-be70-e2baab592c98-config\") pod \"kube-controller-manager-operator-78b949d7b-6vnzn\" (UID: \"9189511e-af92-4fee-be70-e2baab592c98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.025671 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24381e71-6f3a-4c4c-b60d-a10c06aa12f7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sztfd\" (UID: \"24381e71-6f3a-4c4c-b60d-a10c06aa12f7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.026347 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d50c0395-ec10-4463-92e4-29defdd47f62-service-ca-bundle\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:46 crc 
kubenswrapper[4697]: I0127 15:10:46.026596 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbgkk\" (UniqueName: \"kubernetes.io/projected/50568641-b008-4747-bdbc-f474cc35bf58-kube-api-access-bbgkk\") pod \"service-ca-operator-777779d784-qvsns\" (UID: \"50568641-b008-4747-bdbc-f474cc35bf58\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.027445 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc64ac2f-83b2-421c-b9e7-194a679b8653-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxcz8\" (UID: \"cc64ac2f-83b2-421c-b9e7-194a679b8653\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.033117 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a17bf464-23fd-475b-b25a-33e14cd9ced0-metrics-tls\") pod \"dns-default-5xn9m\" (UID: \"a17bf464-23fd-475b-b25a-33e14cd9ced0\") " pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.033439 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc64ac2f-83b2-421c-b9e7-194a679b8653-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxcz8\" (UID: \"cc64ac2f-83b2-421c-b9e7-194a679b8653\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.034517 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d50c0395-ec10-4463-92e4-29defdd47f62-stats-auth\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " 
pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.034730 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.040034 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43fd9fa4-b232-4d49-8f52-27d016de4cad-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.041731 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d50c0395-ec10-4463-92e4-29defdd47f62-metrics-certs\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.042285 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d50c0395-ec10-4463-92e4-29defdd47f62-default-certificate\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.044881 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmssv\" (UniqueName: \"kubernetes.io/projected/a17bf464-23fd-475b-b25a-33e14cd9ced0-kube-api-access-zmssv\") pod \"dns-default-5xn9m\" (UID: \"a17bf464-23fd-475b-b25a-33e14cd9ced0\") " pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:46 crc kubenswrapper[4697]: W0127 15:10:46.044999 4697 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1d0b154_f221_4132_9d6f_a17173841b1f.slice/crio-f2f5b03e3fb3d7b9298e4628c6ddb631d95c3419b930c6c372fefbbbbb37ec85 WatchSource:0}: Error finding container f2f5b03e3fb3d7b9298e4628c6ddb631d95c3419b930c6c372fefbbbbb37ec85: Status 404 returned error can't find the container with id f2f5b03e3fb3d7b9298e4628c6ddb631d95c3419b930c6c372fefbbbbb37ec85 Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.047352 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2p4pk"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.064748 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a8ed06-fb86-479b-a5a1-0dbac195717a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.065479 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.065685 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:46.565637574 +0000 UTC m=+142.738037355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.065851 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c88b976d-68fb-4ca7-8a36-b6d0c1022346-node-bootstrap-token\") pod \"machine-config-server-ng2lw\" (UID: \"c88b976d-68fb-4ca7-8a36-b6d0c1022346\") " pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.065880 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdrnm\" (UniqueName: \"kubernetes.io/projected/c88b976d-68fb-4ca7-8a36-b6d0c1022346-kube-api-access-cdrnm\") pod \"machine-config-server-ng2lw\" (UID: \"c88b976d-68fb-4ca7-8a36-b6d0c1022346\") " pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.065900 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6128d2c8-dc50-4e71-8093-c931f60b8495-cert\") pod \"ingress-canary-d4smc\" (UID: \"6128d2c8-dc50-4e71-8093-c931f60b8495\") " pod="openshift-ingress-canary/ingress-canary-d4smc" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.065921 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6rzr\" (UniqueName: \"kubernetes.io/projected/9957edac-db7a-4d39-8224-f3e24a16bb43-kube-api-access-q6rzr\") pod \"csi-hostpathplugin-8b9kp\" (UID: 
\"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.065942 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-socket-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.065958 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-registration-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.065977 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-mountpoint-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066051 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-csi-data-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066071 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c88b976d-68fb-4ca7-8a36-b6d0c1022346-certs\") pod \"machine-config-server-ng2lw\" (UID: \"c88b976d-68fb-4ca7-8a36-b6d0c1022346\") " 
pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066147 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-plugins-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066194 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz6g9\" (UniqueName: \"kubernetes.io/projected/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-kube-api-access-kz6g9\") pod \"collect-profiles-29492100-sfd69\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066222 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4tgn\" (UniqueName: \"kubernetes.io/projected/6128d2c8-dc50-4e71-8093-c931f60b8495-kube-api-access-z4tgn\") pod \"ingress-canary-d4smc\" (UID: \"6128d2c8-dc50-4e71-8093-c931f60b8495\") " pod="openshift-ingress-canary/ingress-canary-d4smc" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066265 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066284 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-config-volume\") 
pod \"collect-profiles-29492100-sfd69\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066321 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-secret-volume\") pod \"collect-profiles-29492100-sfd69\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066368 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-socket-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066479 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-mountpoint-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066686 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-csi-data-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.066729 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 15:10:46.566716996 +0000 UTC m=+142.739116777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.066932 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-registration-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.067032 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9957edac-db7a-4d39-8224-f3e24a16bb43-plugins-dir\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.067768 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-config-volume\") pod \"collect-profiles-29492100-sfd69\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.069030 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/c88b976d-68fb-4ca7-8a36-b6d0c1022346-node-bootstrap-token\") pod \"machine-config-server-ng2lw\" (UID: \"c88b976d-68fb-4ca7-8a36-b6d0c1022346\") " pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.069600 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c88b976d-68fb-4ca7-8a36-b6d0c1022346-certs\") pod \"machine-config-server-ng2lw\" (UID: \"c88b976d-68fb-4ca7-8a36-b6d0c1022346\") " pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.071506 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6128d2c8-dc50-4e71-8093-c931f60b8495-cert\") pod \"ingress-canary-d4smc\" (UID: \"6128d2c8-dc50-4e71-8093-c931f60b8495\") " pod="openshift-ingress-canary/ingress-canary-d4smc" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.089854 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-secret-volume\") pod \"collect-profiles-29492100-sfd69\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.092499 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.095290 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnjp\" (UniqueName: \"kubernetes.io/projected/5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade-kube-api-access-wtnjp\") pod \"machine-config-operator-74547568cd-wf8n4\" (UID: \"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.108182 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxdxh\" (UniqueName: \"kubernetes.io/projected/baa7401d-bcad-4175-af1b-46414c003f9e-kube-api-access-kxdxh\") pod \"marketplace-operator-79b997595-45xm2\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:46 crc kubenswrapper[4697]: W0127 15:10:46.110651 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod927094cf_5a33_4170_97f1_b9b2c4f5b519.slice/crio-e7123cbbb67d3bd3ef093bfca32caab7d9f7d1cfb8a78acc93ebf99156368f19 WatchSource:0}: Error finding container e7123cbbb67d3bd3ef093bfca32caab7d9f7d1cfb8a78acc93ebf99156368f19: Status 404 returned error can't find the container with id e7123cbbb67d3bd3ef093bfca32caab7d9f7d1cfb8a78acc93ebf99156368f19 Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.130695 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs6nr\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-kube-api-access-hs6nr\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.146425 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9189511e-af92-4fee-be70-e2baab592c98-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6vnzn\" (UID: \"9189511e-af92-4fee-be70-e2baab592c98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.146663 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.154383 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.165728 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtwh9\" (UniqueName: \"kubernetes.io/projected/991f5f59-9d29-4a19-974a-2b358b8b38a0-kube-api-access-wtwh9\") pod \"etcd-operator-b45778765-l8h5h\" (UID: \"991f5f59-9d29-4a19-974a-2b358b8b38a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.168109 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.169386 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:10:46.669363034 +0000 UTC m=+142.841762815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.169600 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.169926 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:46.669915051 +0000 UTC m=+142.842314832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.181561 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-46h6b"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.199550 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-78k6r"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.202012 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.206599 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc64ac2f-83b2-421c-b9e7-194a679b8653-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xxcz8\" (UID: \"cc64ac2f-83b2-421c-b9e7-194a679b8653\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.214685 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.225385 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msng2\" (UniqueName: \"kubernetes.io/projected/24381e71-6f3a-4c4c-b60d-a10c06aa12f7-kube-api-access-msng2\") pod \"kube-storage-version-migrator-operator-b67b599dd-sztfd\" (UID: \"24381e71-6f3a-4c4c-b60d-a10c06aa12f7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.228721 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.254365 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17342d35-ebda-45fa-8b7a-9be1064954a9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-q7j4f\" (UID: \"17342d35-ebda-45fa-8b7a-9be1064954a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.270444 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.271174 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:10:46.771155799 +0000 UTC m=+142.943555580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.272869 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7sx5\" (UniqueName: \"kubernetes.io/projected/a1a8ed06-fb86-479b-a5a1-0dbac195717a-kube-api-access-h7sx5\") pod \"ingress-operator-5b745b69d9-s2bmd\" (UID: \"a1a8ed06-fb86-479b-a5a1-0dbac195717a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:46 crc kubenswrapper[4697]: W0127 15:10:46.273948 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f8433f4_0c5f_40eb_b4c5_88c02b1595ad.slice/crio-a89827a0b161e6470c76e9f9bbc046b40d941bdcd81752a151e74751242decdf WatchSource:0}: Error finding container a89827a0b161e6470c76e9f9bbc046b40d941bdcd81752a151e74751242decdf: Status 404 returned error can't find the container with id a89827a0b161e6470c76e9f9bbc046b40d941bdcd81752a151e74751242decdf Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.288722 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgksq\" (UniqueName: \"kubernetes.io/projected/8041fd88-aa5f-41ea-acab-b694e78d4355-kube-api-access-mgksq\") pod \"package-server-manager-789f6589d5-wp9j5\" (UID: \"8041fd88-aa5f-41ea-acab-b694e78d4355\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:10:46 crc 
kubenswrapper[4697]: W0127 15:10:46.302133 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73d9ac28_74b0_4ead_b4e4_b270264feb05.slice/crio-fe4e040d8176bc800a76d350637f5c5aa6cc4dc50cb6701d65cca795bf09d949 WatchSource:0}: Error finding container fe4e040d8176bc800a76d350637f5c5aa6cc4dc50cb6701d65cca795bf09d949: Status 404 returned error can't find the container with id fe4e040d8176bc800a76d350637f5c5aa6cc4dc50cb6701d65cca795bf09d949 Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.308805 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfr9l\" (UniqueName: \"kubernetes.io/projected/d50c0395-ec10-4463-92e4-29defdd47f62-kube-api-access-qfr9l\") pod \"router-default-5444994796-wmwsd\" (UID: \"d50c0395-ec10-4463-92e4-29defdd47f62\") " pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.325832 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k5gm\" (UniqueName: \"kubernetes.io/projected/a1556a89-d5d8-4eca-bf26-6475efb42496-kube-api-access-5k5gm\") pod \"catalog-operator-68c6474976-dr2qr\" (UID: \"a1556a89-d5d8-4eca-bf26-6475efb42496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.344896 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkkvj\" (UniqueName: \"kubernetes.io/projected/b777a7db-a7b4-4fdd-b30c-fb2243b658a0-kube-api-access-gkkvj\") pod \"service-ca-9c57cc56f-k5tl7\" (UID: \"b777a7db-a7b4-4fdd-b30c-fb2243b658a0\") " pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.347137 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" 
event={"ID":"cbd9208d-08ed-47af-a7cf-b9ee3973b964","Type":"ContainerStarted","Data":"abc06315e62eec4230e597a3282470ba1b3e378e3825b14d4dbe7bcd2a5a5c31"} Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.349685 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" event={"ID":"927094cf-5a33-4170-97f1-b9b2c4f5b519","Type":"ContainerStarted","Data":"e7123cbbb67d3bd3ef093bfca32caab7d9f7d1cfb8a78acc93ebf99156368f19"} Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.350338 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wjd95" event={"ID":"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2","Type":"ContainerStarted","Data":"29d5c9787502e9669abbe96eaf4ab7794727491f9681e13bbc4697ee3d0371ce"} Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.354022 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" event={"ID":"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad","Type":"ContainerStarted","Data":"a89827a0b161e6470c76e9f9bbc046b40d941bdcd81752a151e74751242decdf"} Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.355163 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" event={"ID":"420813d8-71d5-401e-9af6-61296b8a25ba","Type":"ContainerStarted","Data":"08b03c9aea7b1f1eca33d059e3ec751e0bbbf798972f80ce0a75abb08f75a240"} Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.355922 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" event={"ID":"894a6339-d208-46db-8769-ac9153cb1ba0","Type":"ContainerStarted","Data":"885ac78ad4178301d09e55851c0a8ba33a024a355890367aa811516fdf404619"} Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.356860 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" event={"ID":"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362","Type":"ContainerStarted","Data":"3f541973973951c10c5ad73ee11a60fcd2c7b1a343ad4ce5c52f74f80e817225"} Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.360571 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" event={"ID":"d1d0b154-f221-4132-9d6f-a17173841b1f","Type":"ContainerStarted","Data":"f2f5b03e3fb3d7b9298e4628c6ddb631d95c3419b930c6c372fefbbbbb37ec85"} Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.372084 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.372113 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td428\" (UniqueName: \"kubernetes.io/projected/25c9b0f1-c6a8-4521-a480-9f46238e3a22-kube-api-access-td428\") pod \"multus-admission-controller-857f4d67dd-fxbvr\" (UID: \"25c9b0f1-c6a8-4521-a480-9f46238e3a22\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.372417 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:46.872403547 +0000 UTC m=+143.044803328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.376082 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.390961 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc6l2\" (UniqueName: \"kubernetes.io/projected/dd09f8c7-aace-4ca6-ac8c-aa4425391032-kube-api-access-tc6l2\") pod \"packageserver-d55dfcdfc-js7zh\" (UID: \"dd09f8c7-aace-4ca6-ac8c-aa4425391032\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.394989 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.401137 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.429306 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.432503 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-bound-sa-token\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.433979 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.435084 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.441586 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.451373 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgmtq\" (UniqueName: \"kubernetes.io/projected/49dd977b-6315-4446-8804-242e7e94a375-kube-api-access-rgmtq\") pod \"control-plane-machine-set-operator-78cbb6b69f-4tnq9\" (UID: \"49dd977b-6315-4446-8804-242e7e94a375\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.452227 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpztt\" (UniqueName: \"kubernetes.io/projected/ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24-kube-api-access-cpztt\") pod \"olm-operator-6b444d44fb-qkkgm\" (UID: \"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.461941 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.465565 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.469595 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4tgn\" (UniqueName: \"kubernetes.io/projected/6128d2c8-dc50-4e71-8093-c931f60b8495-kube-api-access-z4tgn\") pod \"ingress-canary-d4smc\" (UID: \"6128d2c8-dc50-4e71-8093-c931f60b8495\") " pod="openshift-ingress-canary/ingress-canary-d4smc" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.473337 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.473593 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.473865 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:46.973846312 +0000 UTC m=+143.146246093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.473946 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.474332 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:46.974320925 +0000 UTC m=+143.146720706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.478975 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:10:46 crc kubenswrapper[4697]: W0127 15:10:46.484757 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2b1acef_4d0c_4ce0_aa5a_0e5b28ae08c1.slice/crio-e23196980250cbcac47cf8d5c4f546803468e7180f0177014c3865e6803f8782 WatchSource:0}: Error finding container e23196980250cbcac47cf8d5c4f546803468e7180f0177014c3865e6803f8782: Status 404 returned error can't find the container with id e23196980250cbcac47cf8d5c4f546803468e7180f0177014c3865e6803f8782 Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.485370 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.492862 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.497761 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6rzr\" (UniqueName: \"kubernetes.io/projected/9957edac-db7a-4d39-8224-f3e24a16bb43-kube-api-access-q6rzr\") pod \"csi-hostpathplugin-8b9kp\" (UID: \"9957edac-db7a-4d39-8224-f3e24a16bb43\") " pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.512822 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.516375 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdrnm\" (UniqueName: \"kubernetes.io/projected/c88b976d-68fb-4ca7-8a36-b6d0c1022346-kube-api-access-cdrnm\") pod \"machine-config-server-ng2lw\" (UID: \"c88b976d-68fb-4ca7-8a36-b6d0c1022346\") " pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.522729 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.537569 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz6g9\" (UniqueName: \"kubernetes.io/projected/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-kube-api-access-kz6g9\") pod \"collect-profiles-29492100-sfd69\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.537808 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.544837 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ng2lw" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.553689 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.575929 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.576199 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.576595 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.076535721 +0000 UTC m=+143.248935512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.590114 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d4smc" Jan 27 15:10:46 crc kubenswrapper[4697]: W0127 15:10:46.605186 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode93d54dd_4445_4bfd_a9fb_914d3b06e049.slice/crio-529bf6beb6f7cbc34d68266c68f44d8c61c5422f0d89fef00a6f27c8d4992a1e WatchSource:0}: Error finding container 529bf6beb6f7cbc34d68266c68f44d8c61c5422f0d89fef00a6f27c8d4992a1e: Status 404 returned error can't find the container with id 529bf6beb6f7cbc34d68266c68f44d8c61c5422f0d89fef00a6f27c8d4992a1e Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.659655 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.677293 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.677608 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.177597354 +0000 UTC m=+143.349997135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.786650 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.787372 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.287353779 +0000 UTC m=+143.459753560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.798285 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v52qb"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.848533 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.889244 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.890227 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.390212924 +0000 UTC m=+143.562612705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.946699 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vj475"] Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.990412 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.990751 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.490724891 +0000 UTC m=+143.663124672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:46 crc kubenswrapper[4697]: I0127 15:10:46.991700 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:46 crc kubenswrapper[4697]: E0127 15:10:46.992058 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.49204671 +0000 UTC m=+143.664446491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.040279 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.040321 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.052228 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.081883 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.092363 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.092913 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:10:47.592898357 +0000 UTC m=+143.765298138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.130093 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5xn9m"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.193921 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.194402 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.694389942 +0000 UTC m=+143.866789713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.265040 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq"] Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.296568 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.796522496 +0000 UTC m=+143.968922277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.296616 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.297032 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.297323 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.797313029 +0000 UTC m=+143.969712810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.389484 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" event={"ID":"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade","Type":"ContainerStarted","Data":"fb802409e8768c268614c898567723aa20e966aa6ba47d4197ed98ae55fae634"} Jan 27 15:10:47 crc kubenswrapper[4697]: W0127 15:10:47.390524 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda17bf464_23fd_475b_b25a_33e14cd9ced0.slice/crio-22526bfe0a499a46aac7fff5e8767cbe3101e291f7119f3aa27dd578c471c1aa WatchSource:0}: Error finding container 22526bfe0a499a46aac7fff5e8767cbe3101e291f7119f3aa27dd578c471c1aa: Status 404 returned error can't find the container with id 22526bfe0a499a46aac7fff5e8767cbe3101e291f7119f3aa27dd578c471c1aa Jan 27 15:10:47 crc kubenswrapper[4697]: W0127 15:10:47.396321 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd50c0395_ec10_4463_92e4_29defdd47f62.slice/crio-8cce9f0abc09b9ceec7abd7a77156034aa8838d3fd36677b69fe064db38698fb WatchSource:0}: Error finding container 8cce9f0abc09b9ceec7abd7a77156034aa8838d3fd36677b69fe064db38698fb: Status 404 returned error can't find the container with id 8cce9f0abc09b9ceec7abd7a77156034aa8838d3fd36677b69fe064db38698fb Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.398918 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.399228 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:47.899213996 +0000 UTC m=+144.071613777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.402595 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qvsns"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.440337 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" event={"ID":"420813d8-71d5-401e-9af6-61296b8a25ba","Type":"ContainerStarted","Data":"58e201753242b53762d3b3c1d0de54b924062bcb8b35c90d76fc0460ed9956b0"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.450552 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" 
event={"ID":"24828dfa-ec12-4de9-aaba-96716e62d49a","Type":"ContainerStarted","Data":"63288257e7188d729dff08ce0da5c1a00adc5fd00a1eceb002fb2582c18e1592"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.456555 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" event={"ID":"cbd9208d-08ed-47af-a7cf-b9ee3973b964","Type":"ContainerStarted","Data":"dd011026c6e000f1050a565ac3cfe355c9349908e8237a3478dfa4287a17ed9e"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.483008 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" event={"ID":"4f8433f4-0c5f-40eb-b4c5-88c02b1595ad","Type":"ContainerStarted","Data":"eb5568381e20269a33887233dc9944c29be95db3b1c59cf56841c758252d9dc9"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.500799 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.501721 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.502930 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.002906525 +0000 UTC m=+144.175306316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.504477 4697 generic.go:334] "Generic (PLEG): container finished" podID="f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362" containerID="72f6da5968ac873401e5690f4cdd5cf78880ecef671594667531ca1c4728bacb" exitCode=0 Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.504568 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" event={"ID":"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362","Type":"ContainerDied","Data":"72f6da5968ac873401e5690f4cdd5cf78880ecef671594667531ca1c4728bacb"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.510086 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" event={"ID":"894a6339-d208-46db-8769-ac9153cb1ba0","Type":"ContainerStarted","Data":"2fc1f82f5af8a5feb11e657399a7ee2576c5e556ea67cadb25f04438e85c53ca"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.510705 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.513324 4697 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7ddjp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.513363 4697 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.515817 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" event={"ID":"e93d54dd-4445-4bfd-a9fb-914d3b06e049","Type":"ContainerStarted","Data":"529bf6beb6f7cbc34d68266c68f44d8c61c5422f0d89fef00a6f27c8d4992a1e"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.520833 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" event={"ID":"cef607c4-e16c-4ae6-9b66-3206b100267c","Type":"ContainerStarted","Data":"95462d2aa6b0e3c762e0dc569003a98dfc305b94a43ae2b046e0d5934e6e49d5"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.522712 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" event={"ID":"8ab80f79-35d6-42cc-8480-e2a778d41da7","Type":"ContainerStarted","Data":"e83174264762b49bccd9e63ff6e5b67ecbf204fc11d517b7d3570b6168f305ca"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.523934 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wjd95" event={"ID":"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2","Type":"ContainerStarted","Data":"a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.543278 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" 
event={"ID":"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1","Type":"ContainerStarted","Data":"e23196980250cbcac47cf8d5c4f546803468e7180f0177014c3865e6803f8782"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.548126 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-v52qb" event={"ID":"c9f06c8d-2fe6-44f7-8870-0142002dfae8","Type":"ContainerStarted","Data":"253847f17702bdbb4395f6d2fbdc4b5953b931ed3092ff9ce8fafe05c85737bb"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.550030 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-78k6r" event={"ID":"73d9ac28-74b0-4ead-b4e4-b270264feb05","Type":"ContainerStarted","Data":"fe4e040d8176bc800a76d350637f5c5aa6cc4dc50cb6701d65cca795bf09d949"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.551453 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" event={"ID":"a6dbb097-e288-4cf9-8aa5-f35c997358df","Type":"ContainerStarted","Data":"b5fb4c5d888a65ae171938abc2f4ed135611f7923fd56a19997f8227613f7c48"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.564317 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" event={"ID":"0ac2a583-d3dc-433a-8b0f-92c9984f6b20","Type":"ContainerStarted","Data":"4c264e0ff81961608feaa90f30138397feadb1a38d4a420e11d5bc4cd9423a66"} Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.601045 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45xm2"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.619229 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.619475 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.119452516 +0000 UTC m=+144.291852297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.619829 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.622933 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.122913656 +0000 UTC m=+144.295313527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.638762 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.722584 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.723130 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.223110724 +0000 UTC m=+144.395510505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: W0127 15:10:47.724498 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06d5e1be_2c28_4e27_ba7e_e24b4e72401b.slice/crio-35a614a37caeb47ddff4ab95c4e58911389b48a0bbd950dcb23d694aeb770a00 WatchSource:0}: Error finding container 35a614a37caeb47ddff4ab95c4e58911389b48a0bbd950dcb23d694aeb770a00: Status 404 returned error can't find the container with id 35a614a37caeb47ddff4ab95c4e58911389b48a0bbd950dcb23d694aeb770a00 Jan 27 15:10:47 crc kubenswrapper[4697]: W0127 15:10:47.775254 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc88b976d_68fb_4ca7_8a36_b6d0c1022346.slice/crio-808073119d3c06337c1e58c76928dc3c9a650a389fa049c557068715533392e5 WatchSource:0}: Error finding container 808073119d3c06337c1e58c76928dc3c9a650a389fa049c557068715533392e5: Status 404 returned error can't find the container with id 808073119d3c06337c1e58c76928dc3c9a650a389fa049c557068715533392e5 Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.824178 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:47 crc 
kubenswrapper[4697]: E0127 15:10:47.824554 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.324540967 +0000 UTC m=+144.496940748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.890541 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fxbvr"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.928098 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:47 crc kubenswrapper[4697]: E0127 15:10:47.928406 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.428392512 +0000 UTC m=+144.600792293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.929646 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-l8h5h"] Jan 27 15:10:47 crc kubenswrapper[4697]: I0127 15:10:47.939388 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.029229 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.029885 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.529871177 +0000 UTC m=+144.702270958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:48 crc kubenswrapper[4697]: W0127 15:10:48.064687 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc64ac2f_83b2_421c_b9e7_194a679b8653.slice/crio-782f504f393a3ef2579a6f9722b69d5ccc6c38941d47abdbe8df6fcb052e7ff0 WatchSource:0}: Error finding container 782f504f393a3ef2579a6f9722b69d5ccc6c38941d47abdbe8df6fcb052e7ff0: Status 404 returned error can't find the container with id 782f504f393a3ef2579a6f9722b69d5ccc6c38941d47abdbe8df6fcb052e7ff0 Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.072499 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wjd95" podStartSLOduration=122.07248281 podStartE2EDuration="2m2.07248281s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.069233505 +0000 UTC m=+144.241633276" watchObservedRunningTime="2026-01-27 15:10:48.07248281 +0000 UTC m=+144.244882591" Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.129923 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:48 crc 
kubenswrapper[4697]: E0127 15:10:48.130385 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.630368634 +0000 UTC m=+144.802768405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.153165 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-46h6b" podStartSLOduration=122.153139212 podStartE2EDuration="2m2.153139212s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.146092468 +0000 UTC m=+144.318492259" watchObservedRunningTime="2026-01-27 15:10:48.153139212 +0000 UTC m=+144.325538993" Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.154110 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" podStartSLOduration=122.15410203 podStartE2EDuration="2m2.15410203s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.109394317 +0000 UTC m=+144.281794128" watchObservedRunningTime="2026-01-27 15:10:48.15410203 
+0000 UTC m=+144.326501821" Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.233445 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.235144 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.235472 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.735450373 +0000 UTC m=+144.907850154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.247387 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.290061 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.337120 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.337532 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.837516685 +0000 UTC m=+145.009916456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.339755 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.361579 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k5tl7"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.372376 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.397170 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9"] Jan 27 15:10:48 crc kubenswrapper[4697]: W0127 15:10:48.420331 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17342d35_ebda_45fa_8b7a_9be1064954a9.slice/crio-d2865b1a814cea9ab5c18293220c8308ca21846a52054998daa394c02a2b1c1e WatchSource:0}: Error finding container d2865b1a814cea9ab5c18293220c8308ca21846a52054998daa394c02a2b1c1e: Status 404 returned error can't find the container with id d2865b1a814cea9ab5c18293220c8308ca21846a52054998daa394c02a2b1c1e Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.441344 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.441399 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:48.941386009 +0000 UTC m=+145.113785790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:48 crc kubenswrapper[4697]: W0127 15:10:48.444248 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecacf3dd_ae8b_4d81_87b5_0bfbf0575e24.slice/crio-365a6e60ec50bd0802808b3d6e679432cb48146584a4f51489d46db9969bac65 WatchSource:0}: Error finding container 365a6e60ec50bd0802808b3d6e679432cb48146584a4f51489d46db9969bac65: Status 404 returned error can't find the container with id 365a6e60ec50bd0802808b3d6e679432cb48146584a4f51489d46db9969bac65 Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.487344 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.494037 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.521811 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8b9kp"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.563806 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.563888 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.063872849 +0000 UTC m=+145.236272630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.564356 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.564681 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.064672193 +0000 UTC m=+145.237071974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.602732 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d4smc"] Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.633229 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wmwsd" event={"ID":"d50c0395-ec10-4463-92e4-29defdd47f62","Type":"ContainerStarted","Data":"8cce9f0abc09b9ceec7abd7a77156034aa8838d3fd36677b69fe064db38698fb"} Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.656622 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" event={"ID":"f0e1afff-d250-45de-bc85-3a20f622a52d","Type":"ContainerStarted","Data":"73373789d6ff6bfbb6f2f6cb4cd2921199564bf5882611c5c94e3b72436036bc"} Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.656698 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" event={"ID":"f0e1afff-d250-45de-bc85-3a20f622a52d","Type":"ContainerStarted","Data":"5f47145c69dad5d38a4a59008b2732833cf23b12ebbfcd654c5e06f51efbda08"} Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.660859 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ng2lw" event={"ID":"c88b976d-68fb-4ca7-8a36-b6d0c1022346","Type":"ContainerStarted","Data":"f8237ea5456d22c577dcf34717e85497c629cb7b36509a9ea645b15d130f75c2"} 
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.660908 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ng2lw" event={"ID":"c88b976d-68fb-4ca7-8a36-b6d0c1022346","Type":"ContainerStarted","Data":"808073119d3c06337c1e58c76928dc3c9a650a389fa049c557068715533392e5"} Jan 27 15:10:48 crc kubenswrapper[4697]: W0127 15:10:48.664728 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9957edac_db7a_4d39_8224_f3e24a16bb43.slice/crio-b7f4d1e914e44dc845817b878c818abdfc18da70f3f2e9d76e680a1322a8d545 WatchSource:0}: Error finding container b7f4d1e914e44dc845817b878c818abdfc18da70f3f2e9d76e680a1322a8d545: Status 404 returned error can't find the container with id b7f4d1e914e44dc845817b878c818abdfc18da70f3f2e9d76e680a1322a8d545 Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.666130 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.666638 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.166620804 +0000 UTC m=+145.339020585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.668355 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" event={"ID":"420813d8-71d5-401e-9af6-61296b8a25ba","Type":"ContainerStarted","Data":"b429bd49545ea6761b113fba777d27ec1d756293f71958d01455dde4e5c5e116"} Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.669852 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" event={"ID":"9189511e-af92-4fee-be70-e2baab592c98","Type":"ContainerStarted","Data":"61ac57ffd46e7184576b93f3a473e48ba7d1523ef71eaf0e180e4220320e4711"} Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.673145 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" event={"ID":"8ab80f79-35d6-42cc-8480-e2a778d41da7","Type":"ContainerStarted","Data":"930f61ed7c9c914d98ebde003eab8a740c6b440cec8205243d181b3c0dee857f"} Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.674649 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" event={"ID":"927094cf-5a33-4170-97f1-b9b2c4f5b519","Type":"ContainerStarted","Data":"969d45bafcac5dcfdff6d7ca0fd4b4b087755daf3f664e2b2fb7bc382c8e8611"} Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.678930 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-ng2lw" podStartSLOduration=5.67891622 podStartE2EDuration="5.67891622s" podCreationTimestamp="2026-01-27 15:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.677723445 +0000 UTC m=+144.850123236" watchObservedRunningTime="2026-01-27 15:10:48.67891622 +0000 UTC m=+144.851316001"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.696037 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6snp" podStartSLOduration=123.696019154 podStartE2EDuration="2m3.696019154s" podCreationTimestamp="2026-01-27 15:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.695333444 +0000 UTC m=+144.867733235" watchObservedRunningTime="2026-01-27 15:10:48.696019154 +0000 UTC m=+144.868418935"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.725472 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5xn9m" event={"ID":"a17bf464-23fd-475b-b25a-33e14cd9ced0","Type":"ContainerStarted","Data":"22526bfe0a499a46aac7fff5e8767cbe3101e291f7119f3aa27dd578c471c1aa"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.736537 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" event={"ID":"cef607c4-e16c-4ae6-9b66-3206b100267c","Type":"ContainerStarted","Data":"adeb1751f848eb5334388d1b013ff473d2ee39141c041117c5ce8b457424d9fe"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.751517 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c" event={"ID":"06d5e1be-2c28-4e27-ba7e-e24b4e72401b","Type":"ContainerStarted","Data":"35a614a37caeb47ddff4ab95c4e58911389b48a0bbd950dcb23d694aeb770a00"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.761390 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" event={"ID":"dd09f8c7-aace-4ca6-ac8c-aa4425391032","Type":"ContainerStarted","Data":"bf2251e5b02b247ebdf18613a0b2b8cf89c4637ed6b5103d85b3499dbf54d8e4"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.764847 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" event={"ID":"a1a8ed06-fb86-479b-a5a1-0dbac195717a","Type":"ContainerStarted","Data":"b9f0a8c8406b6e5be04d5462e422aadfb3fe52a8fa181d82066e9f3c68abbbb7"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.767326 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf"
Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.767956 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.267941964 +0000 UTC m=+145.440341745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.777270 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" event={"ID":"25c9b0f1-c6a8-4521-a480-9f46238e3a22","Type":"ContainerStarted","Data":"e464fae1eefcbe65e40d3b34dfbd6e3746cabdce35d28f62d9409d7ee2748466"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.778419 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" event={"ID":"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24","Type":"ContainerStarted","Data":"365a6e60ec50bd0802808b3d6e679432cb48146584a4f51489d46db9969bac65"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.781448 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" event={"ID":"e2b1acef-4d0c-4ce0-aa5a-0e5b28ae08c1","Type":"ContainerStarted","Data":"bf4fff06a967a5ca01602c9b2c1e048b315bd37e51238d72681eab4760085e79"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.784691 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" event={"ID":"991f5f59-9d29-4a19-974a-2b358b8b38a0","Type":"ContainerStarted","Data":"d42025a239b02567a5d236d788381a76239aca1dc5055bc8ed497c74459d522f"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.786118 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" event={"ID":"79f6280f-8dc0-42b8-be4c-cbbc6528bf58","Type":"ContainerStarted","Data":"68c9cc7a37facf9e7d739a8819d0899e5dd90a151d0e7075a4ba5bad7ecd91f4"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.787716 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" event={"ID":"24381e71-6f3a-4c4c-b60d-a10c06aa12f7","Type":"ContainerStarted","Data":"827a842056ffeaf976a05c8ef356008b70c30148e4e933ebb6a057d2e8770d47"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.789331 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" event={"ID":"24828dfa-ec12-4de9-aaba-96716e62d49a","Type":"ContainerStarted","Data":"4d7ec758b6907fa68f890c141ed29a5d149c6698b620aedc7be76c3588e23169"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.790131 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.791479 4697 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w85nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.791516 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.797036 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" event={"ID":"baa7401d-bcad-4175-af1b-46414c003f9e","Type":"ContainerStarted","Data":"815121f673a37302628072b384cf0f76b903303b6ed6a35e95c05b89d9509010"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.798261 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" event={"ID":"cc64ac2f-83b2-421c-b9e7-194a679b8653","Type":"ContainerStarted","Data":"782f504f393a3ef2579a6f9722b69d5ccc6c38941d47abdbe8df6fcb052e7ff0"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.799278 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" event={"ID":"b777a7db-a7b4-4fdd-b30c-fb2243b658a0","Type":"ContainerStarted","Data":"0d33c953945575d7b68468d8efd90f9482a887154d08eda55db349217f1ca5be"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.801476 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" event={"ID":"d1d0b154-f221-4132-9d6f-a17173841b1f","Type":"ContainerStarted","Data":"b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.801506 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.803889 4697 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tgccn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body=
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.803926 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" podUID="d1d0b154-f221-4132-9d6f-a17173841b1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.805396 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" event={"ID":"49dd977b-6315-4446-8804-242e7e94a375","Type":"ContainerStarted","Data":"63fb5b224d031ce73277a7748620ea97677af1c34e4629f96e59652d3e597922"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.807221 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" event={"ID":"17342d35-ebda-45fa-8b7a-9be1064954a9","Type":"ContainerStarted","Data":"d2865b1a814cea9ab5c18293220c8308ca21846a52054998daa394c02a2b1c1e"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.814145 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zdp8l" podStartSLOduration=122.81412718 podStartE2EDuration="2m2.81412718s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.811928687 +0000 UTC m=+144.984328468" watchObservedRunningTime="2026-01-27 15:10:48.81412718 +0000 UTC m=+144.986526961"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.857550 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" event={"ID":"cbd9208d-08ed-47af-a7cf-b9ee3973b964","Type":"ContainerStarted","Data":"206c8b0dc1da5c6442f87635c8f63bcc056c943a61639dc1357dd4300130be2e"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.868323 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podStartSLOduration=121.868294727 podStartE2EDuration="2m1.868294727s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.835198999 +0000 UTC m=+145.007598800" watchObservedRunningTime="2026-01-27 15:10:48.868294727 +0000 UTC m=+145.040694518"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.869557 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.870208 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.370176351 +0000 UTC m=+145.542576132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.870512 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.872145 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" podStartSLOduration=122.872132898 podStartE2EDuration="2m2.872132898s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.869271085 +0000 UTC m=+145.041670866" watchObservedRunningTime="2026-01-27 15:10:48.872132898 +0000 UTC m=+145.044532679"
Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.878572 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.378533793 +0000 UTC m=+145.550933574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.902352 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" event={"ID":"e93d54dd-4445-4bfd-a9fb-914d3b06e049","Type":"ContainerStarted","Data":"003f7025269f8f2f0640a14ba87d91cb74f0952df67652e2690edeee62c7c1ae"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.925935 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rx9c4" podStartSLOduration=122.925915643 podStartE2EDuration="2m2.925915643s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.924692168 +0000 UTC m=+145.097091949" watchObservedRunningTime="2026-01-27 15:10:48.925915643 +0000 UTC m=+145.098315424"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.926660 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p5q5" podStartSLOduration=121.926654015 podStartE2EDuration="2m1.926654015s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.901770715 +0000 UTC m=+145.074170506" watchObservedRunningTime="2026-01-27 15:10:48.926654015 +0000 UTC m=+145.099053796"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.929323 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" event={"ID":"50568641-b008-4747-bdbc-f474cc35bf58","Type":"ContainerStarted","Data":"bb54f89fd760f5f64bbe41a68c48a17add7025ee2b9727bd9c53026cf31f30b2"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.950737 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-78k6r" event={"ID":"73d9ac28-74b0-4ead-b4e4-b270264feb05","Type":"ContainerStarted","Data":"e33bd7cbf3778228ca190ab911c0b0638a3e181005a1e63447946ef07e9c92da"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.951506 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-78k6r"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.956492 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.956546 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.957433 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-v52qb" event={"ID":"c9f06c8d-2fe6-44f7-8870-0142002dfae8","Type":"ContainerStarted","Data":"f14f8fa81cb0b9c462639c247bb65c6567b238c8f08635a6c67425c895e437cb"}
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.958193 4697 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7ddjp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.958307 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.958752 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-v52qb"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.960722 4697 patch_prober.go:28] interesting pod/console-operator-58897d9998-v52qb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.960773 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-v52qb" podUID="c9f06c8d-2fe6-44f7-8870-0142002dfae8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.972615 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:10:48 crc kubenswrapper[4697]: E0127 15:10:48.975840 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.475813597 +0000 UTC m=+145.648213378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.976826 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-78k6r" podStartSLOduration=122.971953775 podStartE2EDuration="2m2.971953775s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.966684112 +0000 UTC m=+145.139083893" watchObservedRunningTime="2026-01-27 15:10:48.971953775 +0000 UTC m=+145.144353556"
Jan 27 15:10:48 crc kubenswrapper[4697]: I0127 15:10:48.987962 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-v52qb" podStartSLOduration=122.987943067 podStartE2EDuration="2m2.987943067s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:48.987269978 +0000 UTC m=+145.159669759" watchObservedRunningTime="2026-01-27 15:10:48.987943067 +0000 UTC m=+145.160342848"
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.074848 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf"
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.075368 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.575353366 +0000 UTC m=+145.747753147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.176180 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.176629 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.676605534 +0000 UTC m=+145.849005325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.176760 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf"
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.177057 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.677046906 +0000 UTC m=+145.849446687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.277882 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.278311 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.778284145 +0000 UTC m=+145.950683926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.278442 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf"
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.278924 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.778911303 +0000 UTC m=+145.951311084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.379432 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.379865 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.879845552 +0000 UTC m=+146.052245333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.481041 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf"
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.481613 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:49.981591845 +0000 UTC m=+146.153991716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.583440 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.584443 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.084427569 +0000 UTC m=+146.256827340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.685235 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf"
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.685506 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.185494983 +0000 UTC m=+146.357894764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.787014 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.787227 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.287201714 +0000 UTC m=+146.459601495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.787341 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf"
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.787614 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.287606086 +0000 UTC m=+146.460005867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.888888 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.889068 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.38903852 +0000 UTC m=+146.561438291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.889469 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf"
Jan 27 15:10:49 crc kubenswrapper[4697]: E0127 15:10:49.889843 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.389833423 +0000 UTC m=+146.562233204 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.964351 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" event={"ID":"24381e71-6f3a-4c4c-b60d-a10c06aa12f7","Type":"ContainerStarted","Data":"13fee8d65082adda2fa7fbf0cfe0e438bc2d9f5c34f02a2ae223248755b07598"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.970204 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d4smc" event={"ID":"6128d2c8-dc50-4e71-8093-c931f60b8495","Type":"ContainerStarted","Data":"cfce51272abcf557932b980761871b0b4609158b79eec25b575be7afb25cac2a"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.970259 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d4smc" event={"ID":"6128d2c8-dc50-4e71-8093-c931f60b8495","Type":"ContainerStarted","Data":"26067f08789b3a88f3e834423f069dd40908eeffd6ce2fea06ae484aa2c8598d"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.972477 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" event={"ID":"8041fd88-aa5f-41ea-acab-b694e78d4355","Type":"ContainerStarted","Data":"5dee5ae9bc19930492b09d2b7915b9c505efdbdf7df31262fd89591172e2c095"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.972515 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" event={"ID":"8041fd88-aa5f-41ea-acab-b694e78d4355","Type":"ContainerStarted","Data":"d8d9f8bb3694a3904dd7b80c150d2c1c04794d2e7d3a0d82ce8cd138598765d2"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.974364 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" event={"ID":"a1a8ed06-fb86-479b-a5a1-0dbac195717a","Type":"ContainerStarted","Data":"f4a04d06b5810ff8519d54fcc3e7b832aa0f72e86a0f295f078fe334680a7d53"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.976262 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" event={"ID":"25c9b0f1-c6a8-4521-a480-9f46238e3a22","Type":"ContainerStarted","Data":"334cdbf1b89ab45caf49df46b1dac3b414c4ac6c5c8aef09dfcfad11b17b1fef"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.981111 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" event={"ID":"dd09f8c7-aace-4ca6-ac8c-aa4425391032","Type":"ContainerStarted","Data":"743a1be335628474f47c18a5fc1820e04ef3e5baf36d37866bc2ff5eb133d7ee"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.982730 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wmwsd" event={"ID":"d50c0395-ec10-4463-92e4-29defdd47f62","Type":"ContainerStarted","Data":"0c1e748ada0cbd169ffde3e9e43f874ac79ce083e0872116674d8c5c065d51d7"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.990929 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:49 crc 
kubenswrapper[4697]: E0127 15:10:49.991335 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.491319498 +0000 UTC m=+146.663719279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.992364 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" event={"ID":"49dd977b-6315-4446-8804-242e7e94a375","Type":"ContainerStarted","Data":"90b2d58be352279c1e64dffc56b80a5ebbc9d81af1e1635554ed6caf9bf0a19d"} Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.995251 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sztfd" podStartSLOduration=122.995239302 podStartE2EDuration="2m2.995239302s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:49.994442078 +0000 UTC m=+146.166841859" watchObservedRunningTime="2026-01-27 15:10:49.995239302 +0000 UTC m=+146.167639083" Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.999520 4697 generic.go:334] "Generic (PLEG): container finished" podID="a6dbb097-e288-4cf9-8aa5-f35c997358df" 
containerID="abb22d93eeb477b13960e84b5724084d9e7751def3a13b07c41c888b10a5dd50" exitCode=0 Jan 27 15:10:49 crc kubenswrapper[4697]: I0127 15:10:49.999802 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" event={"ID":"a6dbb097-e288-4cf9-8aa5-f35c997358df","Type":"ContainerDied","Data":"abb22d93eeb477b13960e84b5724084d9e7751def3a13b07c41c888b10a5dd50"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.006241 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" event={"ID":"8ab80f79-35d6-42cc-8480-e2a778d41da7","Type":"ContainerStarted","Data":"e1423562e4bb7802df5d9dfd73b2419ab8c9c61b7be9aae76fd1519e22e9fe09"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.011529 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" event={"ID":"927094cf-5a33-4170-97f1-b9b2c4f5b519","Type":"ContainerStarted","Data":"caa2c33c92f4219f68c62fe79f5fdabaf70c7ae68c6947c01f36dd5e20bdd1d7"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.013201 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" event={"ID":"f0e1afff-d250-45de-bc85-3a20f622a52d","Type":"ContainerStarted","Data":"72e68c45d232c9fd519d0917ff518a6d56aa99520ed2dfe55dc404822724aeba"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.015684 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" event={"ID":"50568641-b008-4747-bdbc-f474cc35bf58","Type":"ContainerStarted","Data":"7e741894f5cb5df01d60e4fcda7dff59fa4bdcc1de9efb802b34159f3a84f206"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.023262 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-d4smc" 
podStartSLOduration=7.023248052 podStartE2EDuration="7.023248052s" podCreationTimestamp="2026-01-27 15:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.022004966 +0000 UTC m=+146.194404747" watchObservedRunningTime="2026-01-27 15:10:50.023248052 +0000 UTC m=+146.195647833" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.027840 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" event={"ID":"9957edac-db7a-4d39-8224-f3e24a16bb43","Type":"ContainerStarted","Data":"b7f4d1e914e44dc845817b878c818abdfc18da70f3f2e9d76e680a1322a8d545"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.040681 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" event={"ID":"0ac2a583-d3dc-433a-8b0f-92c9984f6b20","Type":"ContainerStarted","Data":"c4431a31b91ad616364cd99008e8829320f004601a73cf4c5d2d375b945d6c98"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.042736 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" event={"ID":"baa7401d-bcad-4175-af1b-46414c003f9e","Type":"ContainerStarted","Data":"31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.043939 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.046504 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" event={"ID":"991f5f59-9d29-4a19-974a-2b358b8b38a0","Type":"ContainerStarted","Data":"9be0b1bd3ea84cb3e2d4f208e64a91ce3e0172969f4a2678ae1f57ae6c25332a"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.046811 4697 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-45xm2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.046854 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.069766 4697 generic.go:334] "Generic (PLEG): container finished" podID="cef607c4-e16c-4ae6-9b66-3206b100267c" containerID="adeb1751f848eb5334388d1b013ff473d2ee39141c041117c5ce8b457424d9fe" exitCode=0 Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.061048 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" event={"ID":"79f6280f-8dc0-42b8-be4c-cbbc6528bf58","Type":"ContainerStarted","Data":"fa905afa8cf330172707edf7a50b0996520776d36a6e83c10c9788272644f84f"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.075027 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" event={"ID":"cef607c4-e16c-4ae6-9b66-3206b100267c","Type":"ContainerDied","Data":"adeb1751f848eb5334388d1b013ff473d2ee39141c041117c5ce8b457424d9fe"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.087433 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wmwsd" podStartSLOduration=123.087401378 podStartE2EDuration="2m3.087401378s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.087059267 +0000 UTC m=+146.259459048" watchObservedRunningTime="2026-01-27 15:10:50.087401378 +0000 UTC m=+146.259801159" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.087877 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" event={"ID":"17342d35-ebda-45fa-8b7a-9be1064954a9","Type":"ContainerStarted","Data":"ef6772f00138285abf055b0174dc244d36e9716a4e31f0a9c13e418df1bd6a09"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.093989 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.099167 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.599152147 +0000 UTC m=+146.771552038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.102955 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" event={"ID":"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362","Type":"ContainerStarted","Data":"797040738bdc16bb382a2c113041256eed2904e30a911c892860ca26f89c9a3a"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.105126 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" event={"ID":"cc64ac2f-83b2-421c-b9e7-194a679b8653","Type":"ContainerStarted","Data":"d1c062f75c043e0227b31c45934ada3e4d26ef48e8f2687431fe32b345ce4f4f"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.117473 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" podStartSLOduration=123.117452596 podStartE2EDuration="2m3.117452596s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.115723467 +0000 UTC m=+146.288123268" watchObservedRunningTime="2026-01-27 15:10:50.117452596 +0000 UTC m=+146.289852387" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.118109 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5xn9m" 
event={"ID":"a17bf464-23fd-475b-b25a-33e14cd9ced0","Type":"ContainerStarted","Data":"783476fd727b03b17e673aea0dda4827ee906faa333e497306623dec103b75c6"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.128958 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" event={"ID":"9189511e-af92-4fee-be70-e2baab592c98","Type":"ContainerStarted","Data":"add153b0f3ae9d5b8a2857f10d1a39c5208e13873fe42d9b0339d3ab6bbffb05"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.134738 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" event={"ID":"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade","Type":"ContainerStarted","Data":"7fea9a2d42c219340059ec8d76c0623d6a437d463608b70f7306dcbbf408e6e0"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.134808 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" event={"ID":"5477c5d7-5d1a-46a7-bee2-eeb3d5ef2ade","Type":"ContainerStarted","Data":"6d15dcc128c14c52aac47a1d8ed5345993eaaeb1e0bdf23eb50cd5b5ce49461c"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.141933 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" event={"ID":"ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24","Type":"ContainerStarted","Data":"372bc14ad5afcb7b4928c0bc6927a1cd8323f863670a268a6e16ec552964cc8a"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.142313 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" podStartSLOduration=124.142290685 podStartE2EDuration="2m4.142290685s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-27 15:10:50.134908492 +0000 UTC m=+146.307308273" watchObservedRunningTime="2026-01-27 15:10:50.142290685 +0000 UTC m=+146.314690456" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.143117 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.147224 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" event={"ID":"b777a7db-a7b4-4fdd-b30c-fb2243b658a0","Type":"ContainerStarted","Data":"8c825950f74ed8b7719e6431f9b0eb6fb3b5d42ead262c0b1541892bbc82b62d"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.149110 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c" event={"ID":"06d5e1be-2c28-4e27-ba7e-e24b4e72401b","Type":"ContainerStarted","Data":"f8ba4d90c9f16f66f8f30cea41ad12c3656b52e933c1c4ccb49039ca9ebbb70e"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.149252 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c" event={"ID":"06d5e1be-2c28-4e27-ba7e-e24b4e72401b","Type":"ContainerStarted","Data":"09e5de16f17a8a94f4867eca1cde3767e0052fae5250405191b28249244f3aaf"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.153211 4697 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qkkgm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.153333 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" podUID="ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24" 
containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.164197 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" event={"ID":"a1556a89-d5d8-4eca-bf26-6475efb42496","Type":"ContainerStarted","Data":"481c96a1b6a45509745f89e9b204b54380a746454dfedb37a6275fd97eb369db"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.165210 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" event={"ID":"a1556a89-d5d8-4eca-bf26-6475efb42496","Type":"ContainerStarted","Data":"f980c0fbf87533a9884c6d16f0ee43694153b577053f2eea4d2a3ef4f47714e2"} Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.166394 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.166465 4697 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w85nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.166491 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.166763 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.166813 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.180992 4697 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dr2qr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.181047 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" podUID="a1556a89-d5d8-4eca-bf26-6475efb42496" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.181513 4697 patch_prober.go:28] interesting pod/console-operator-58897d9998-v52qb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.181691 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-v52qb" podUID="c9f06c8d-2fe6-44f7-8870-0142002dfae8" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.199421 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.200234 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.70020861 +0000 UTC m=+146.872608391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.220866 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.235182 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mwhvk" podStartSLOduration=124.235164741 podStartE2EDuration="2m4.235164741s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 15:10:50.234124241 +0000 UTC m=+146.406524022" watchObservedRunningTime="2026-01-27 15:10:50.235164741 +0000 UTC m=+146.407564522" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.292428 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-l8h5h" podStartSLOduration=124.292410317 podStartE2EDuration="2m4.292410317s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.28940363 +0000 UTC m=+146.461803421" watchObservedRunningTime="2026-01-27 15:10:50.292410317 +0000 UTC m=+146.464810098" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.317659 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.323342 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.823325481 +0000 UTC m=+146.995725262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.344578 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5rsq" podStartSLOduration=123.344561765 podStartE2EDuration="2m3.344561765s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.342976159 +0000 UTC m=+146.515375940" watchObservedRunningTime="2026-01-27 15:10:50.344561765 +0000 UTC m=+146.516961546" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.381308 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qvsns" podStartSLOduration=123.381289258 podStartE2EDuration="2m3.381289258s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.378975071 +0000 UTC m=+146.551374862" watchObservedRunningTime="2026-01-27 15:10:50.381289258 +0000 UTC m=+146.553689039" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.422387 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.422760 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:50.922741246 +0000 UTC m=+147.095141027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.440729 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6vnzn" podStartSLOduration=123.440709406 podStartE2EDuration="2m3.440709406s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.438891234 +0000 UTC m=+146.611291025" watchObservedRunningTime="2026-01-27 15:10:50.440709406 +0000 UTC m=+146.613109187" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.445049 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.445229 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="Get 
\"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.445270 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.524617 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.524989 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" podStartSLOduration=123.524972233 podStartE2EDuration="2m3.524972233s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.524437328 +0000 UTC m=+146.696837099" watchObservedRunningTime="2026-01-27 15:10:50.524972233 +0000 UTC m=+146.697372014" Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.525045 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.025032085 +0000 UTC m=+147.197431866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.549514 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-k5tl7" podStartSLOduration=123.549491753 podStartE2EDuration="2m3.549491753s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.548120883 +0000 UTC m=+146.720520674" watchObservedRunningTime="2026-01-27 15:10:50.549491753 +0000 UTC m=+146.721891534" Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.628909 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.629459 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.129437565 +0000 UTC m=+147.301837346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.640923 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.641494 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.141482303 +0000 UTC m=+147.313882084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.741903 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.742317 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.242300269 +0000 UTC m=+147.414700050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.843404 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.843871 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.343852727 +0000 UTC m=+147.516252588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.944170 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.944548 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.444525968 +0000 UTC m=+147.616925749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:50 crc kubenswrapper[4697]: I0127 15:10:50.944682 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:50 crc kubenswrapper[4697]: E0127 15:10:50.945076 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.445061144 +0000 UTC m=+147.617460925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.046298 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.046468 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.546442076 +0000 UTC m=+147.718841857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.046601 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.046979 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.546965431 +0000 UTC m=+147.719365212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.052000 4697 csr.go:261] certificate signing request csr-9n9xg is approved, waiting to be issued Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.063414 4697 csr.go:257] certificate signing request csr-9n9xg is issued Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.147419 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.147585 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.64754126 +0000 UTC m=+147.819941051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.148115 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.148395 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.648383654 +0000 UTC m=+147.820783435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.165825 4697 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tgccn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.165897 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" podUID="d1d0b154-f221-4132-9d6f-a17173841b1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.171291 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" event={"ID":"8041fd88-aa5f-41ea-acab-b694e78d4355","Type":"ContainerStarted","Data":"031a144c3d6a2b1f0b5a2e467e669233e43c7a0eb0830c412585f7167b91e91b"} Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.171436 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.179716 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" event={"ID":"a6dbb097-e288-4cf9-8aa5-f35c997358df","Type":"ContainerStarted","Data":"3622b64c1112fb6b34a057f93e0af220e472d0b23285bae3155489a4805489cc"} Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.181675 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" event={"ID":"cef607c4-e16c-4ae6-9b66-3206b100267c","Type":"ContainerStarted","Data":"9084477cc6d20324b96ee631a655a7b7bbecfae378e6116ac2057cbbdc80761b"} Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.181858 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.184512 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" event={"ID":"a1a8ed06-fb86-479b-a5a1-0dbac195717a","Type":"ContainerStarted","Data":"7271baf816b32934f92460f097b4283bb0e6bccabe5637c754fc6b64c7da4aff"} Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.187287 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" event={"ID":"25c9b0f1-c6a8-4521-a480-9f46238e3a22","Type":"ContainerStarted","Data":"1f2866bfadf8462509453ae9d5a1558e802f26a4a2712603baa8d8744094079d"} Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.189511 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" event={"ID":"f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362","Type":"ContainerStarted","Data":"5ba41595754312dde491670414dfbbc2f1c781a031cdafb3831cc6980c3ecb39"} Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.194325 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5xn9m" 
event={"ID":"a17bf464-23fd-475b-b25a-33e14cd9ced0","Type":"ContainerStarted","Data":"b3936d77f199e9e9c03646885953ead1980fcd334ce765d1ed04ff7e674159ce"} Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.194863 4697 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w85nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.194907 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.195002 4697 patch_prober.go:28] interesting pod/console-operator-58897d9998-v52qb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.195040 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-v52qb" podUID="c9f06c8d-2fe6-44f7-8870-0142002dfae8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.195207 4697 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dr2qr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 
10.217.0.40:8443: connect: connection refused" start-of-body= Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.195227 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" podUID="a1556a89-d5d8-4eca-bf26-6475efb42496" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.195474 4697 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-45xm2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.195493 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.196235 4697 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qkkgm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.196268 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" podUID="ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 15:10:51 crc 
kubenswrapper[4697]: I0127 15:10:51.230035 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" podStartSLOduration=124.230020176 podStartE2EDuration="2m4.230020176s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:50.612356571 +0000 UTC m=+146.784756352" watchObservedRunningTime="2026-01-27 15:10:51.230020176 +0000 UTC m=+147.402419957" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.231108 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" podStartSLOduration=124.231102736 podStartE2EDuration="2m4.231102736s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:51.220081769 +0000 UTC m=+147.392481560" watchObservedRunningTime="2026-01-27 15:10:51.231102736 +0000 UTC m=+147.403502517" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.249620 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.249875 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.749810268 +0000 UTC m=+147.922210049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.249985 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.250376 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.750365194 +0000 UTC m=+147.922765055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.351540 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.351695 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.851668755 +0000 UTC m=+148.024068536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.352906 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.355929 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.855918157 +0000 UTC m=+148.028317938 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.446707 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:10:51 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:10:51 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:10:51 crc kubenswrapper[4697]: healthz check failed Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.446769 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.455143 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.455332 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:10:51.955306592 +0000 UTC m=+148.127706373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.455418 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.455759 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:51.955746934 +0000 UTC m=+148.128146715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.544943 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9ds6c" podStartSLOduration=124.544927604 podStartE2EDuration="2m4.544927604s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:51.298760864 +0000 UTC m=+147.471160665" watchObservedRunningTime="2026-01-27 15:10:51.544927604 +0000 UTC m=+147.717327385" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.556008 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.556409 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.056390725 +0000 UTC m=+148.228790516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.621943 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-s2bmd" podStartSLOduration=124.621927481 podStartE2EDuration="2m4.621927481s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:51.551438912 +0000 UTC m=+147.723838693" watchObservedRunningTime="2026-01-27 15:10:51.621927481 +0000 UTC m=+147.794327262" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.657324 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.657575 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.157563961 +0000 UTC m=+148.329963742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.663966 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" podStartSLOduration=124.663945647 podStartE2EDuration="2m4.663945647s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:51.658737316 +0000 UTC m=+147.831137107" watchObservedRunningTime="2026-01-27 15:10:51.663945647 +0000 UTC m=+147.836345428" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.664377 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" podStartSLOduration=125.664353458 podStartE2EDuration="2m5.664353458s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:51.623937629 +0000 UTC m=+147.796337410" watchObservedRunningTime="2026-01-27 15:10:51.664353458 +0000 UTC m=+147.836753239" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.758635 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.758851 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.25882658 +0000 UTC m=+148.431226361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.758957 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.759217 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.259199041 +0000 UTC m=+148.431598822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.788452 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2p4pk" podStartSLOduration=125.788437357 podStartE2EDuration="2m5.788437357s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:51.723164819 +0000 UTC m=+147.895564590" watchObservedRunningTime="2026-01-27 15:10:51.788437357 +0000 UTC m=+147.960837138" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.860402 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.860561 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.360539222 +0000 UTC m=+148.532939013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.860696 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.860984 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.360975905 +0000 UTC m=+148.533375676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.932690 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7j4f" podStartSLOduration=124.932673029 podStartE2EDuration="2m4.932673029s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:51.792226406 +0000 UTC m=+147.964626187" watchObservedRunningTime="2026-01-27 15:10:51.932673029 +0000 UTC m=+148.105072810" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.961245 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:51 crc kubenswrapper[4697]: E0127 15:10:51.961710 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.461690798 +0000 UTC m=+148.634090579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.996376 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4tnq9" podStartSLOduration=124.99635568 podStartE2EDuration="2m4.99635568s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:51.990875762 +0000 UTC m=+148.163275543" watchObservedRunningTime="2026-01-27 15:10:51.99635568 +0000 UTC m=+148.168755461" Jan 27 15:10:51 crc kubenswrapper[4697]: I0127 15:10:51.996807 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fxbvr" podStartSLOduration=124.996801153 podStartE2EDuration="2m4.996801153s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:51.933209564 +0000 UTC m=+148.105609345" watchObservedRunningTime="2026-01-27 15:10:51.996801153 +0000 UTC m=+148.169200934" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.048664 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xxcz8" podStartSLOduration=125.048649033 podStartE2EDuration="2m5.048649033s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:52.0474751 +0000 UTC m=+148.219874881" watchObservedRunningTime="2026-01-27 15:10:52.048649033 +0000 UTC m=+148.221048814" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.062594 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.062980 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.562965157 +0000 UTC m=+148.735364938 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.064345 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 15:05:51 +0000 UTC, rotation deadline is 2026-11-16 16:38:42.56245882 +0000 UTC Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.064398 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7033h27m50.498062831s for next certificate rotation Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.101022 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wf8n4" podStartSLOduration=125.101006848 podStartE2EDuration="2m5.101006848s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:52.100955086 +0000 UTC m=+148.273354867" watchObservedRunningTime="2026-01-27 15:10:52.101006848 +0000 UTC m=+148.273406629" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.145498 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5xn9m" podStartSLOduration=9.145482104 podStartE2EDuration="9.145482104s" podCreationTimestamp="2026-01-27 15:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:52.145308388 +0000 UTC m=+148.317708169" 
watchObservedRunningTime="2026-01-27 15:10:52.145482104 +0000 UTC m=+148.317881885" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.163596 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.164055 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.664035131 +0000 UTC m=+148.836434912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.201797 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" event={"ID":"9957edac-db7a-4d39-8224-f3e24a16bb43","Type":"ContainerStarted","Data":"c53f6955e3509bcfe4ff9251878beb4acf62baf84842b2dd2a0b3681ca543704"} Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.202452 4697 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qkkgm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" 
start-of-body= Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.202498 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" podUID="ecacf3dd-ae8b-4d81-87b5-0bfbf0575e24" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.202512 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5xn9m" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.203264 4697 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dr2qr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.203260 4697 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-45xm2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.203312 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.203285 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" podUID="a1556a89-d5d8-4eca-bf26-6475efb42496" containerName="catalog-operator" probeResult="failure" 
output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.238844 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" podStartSLOduration=126.238829503 podStartE2EDuration="2m6.238829503s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:52.233608062 +0000 UTC m=+148.406007853" watchObservedRunningTime="2026-01-27 15:10:52.238829503 +0000 UTC m=+148.411229284" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.265199 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.265633 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.765614058 +0000 UTC m=+148.938013909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.301935 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tggvq" podStartSLOduration=126.301919558 podStartE2EDuration="2m6.301919558s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:52.300989062 +0000 UTC m=+148.473388843" watchObservedRunningTime="2026-01-27 15:10:52.301919558 +0000 UTC m=+148.474319339" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.366361 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.366750 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.866542847 +0000 UTC m=+149.038942628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.367183 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.368722 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.86870054 +0000 UTC m=+149.041100431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.426630 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" podStartSLOduration=125.426611815 podStartE2EDuration="2m5.426611815s" podCreationTimestamp="2026-01-27 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:52.402867068 +0000 UTC m=+148.575266849" watchObservedRunningTime="2026-01-27 15:10:52.426611815 +0000 UTC m=+148.599011596" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.444915 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:10:52 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:10:52 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:10:52 crc kubenswrapper[4697]: healthz check failed Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.444974 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.469709 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.469910 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.969884176 +0000 UTC m=+149.142283957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.470086 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.470436 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:52.970419082 +0000 UTC m=+149.142818863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.571580 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.571876 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.071860896 +0000 UTC m=+149.244260667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.673340 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.673747 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.173731083 +0000 UTC m=+149.346130864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.774905 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.775085 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.275057533 +0000 UTC m=+149.447457314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.775250 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.775555 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.275542238 +0000 UTC m=+149.447942019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.876499 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.876892 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.376869868 +0000 UTC m=+149.549269659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.877093 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.877425 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.377413673 +0000 UTC m=+149.549813454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:52 crc kubenswrapper[4697]: I0127 15:10:52.978613 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:52 crc kubenswrapper[4697]: E0127 15:10:52.978937 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.47892087 +0000 UTC m=+149.651320651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.079608 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.079946 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.579929801 +0000 UTC m=+149.752329582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.180617 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.180986 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.680970783 +0000 UTC m=+149.853370564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.281589 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.281891 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.781878792 +0000 UTC m=+149.954278573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.382905 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.383666 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.883649986 +0000 UTC m=+150.056049767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.447352 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:10:53 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:10:53 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:10:53 crc kubenswrapper[4697]: healthz check failed Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.447404 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.484593 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.485741 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.485851 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.485950 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.486043 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.486353 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:53.986339646 +0000 UTC m=+150.158739427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.489904 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.496002 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.499395 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.510743 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.587330 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.587488 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.587691 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.087661987 +0000 UTC m=+150.260061768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.587871 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.588199 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.088181971 +0000 UTC m=+150.260581752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.593742 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.689061 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.689259 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.189232794 +0000 UTC m=+150.361632575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.689385 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.689688 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.189681137 +0000 UTC m=+150.362080918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.787955 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.789973 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.790285 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.290269796 +0000 UTC m=+150.462669577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.891933 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.892386 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.392373729 +0000 UTC m=+150.564773510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:53 crc kubenswrapper[4697]: I0127 15:10:53.995306 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:53 crc kubenswrapper[4697]: E0127 15:10:53.995705 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.495687068 +0000 UTC m=+150.668086859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.097925 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.098190 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.598179642 +0000 UTC m=+150.770579423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.193563 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.194315 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.199762 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.200219 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.700200633 +0000 UTC m=+150.872600414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.219999 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.221137 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.221335 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.256316 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" event={"ID":"9957edac-db7a-4d39-8224-f3e24a16bb43","Type":"ContainerStarted","Data":"de41306f8f99e21ef438ea9765bf7a50952174d547b72510171c767ee44e9cd4"} Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.301414 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf07624a-74f5-4561-81f2-d1955c199a85-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf07624a-74f5-4561-81f2-d1955c199a85\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.301497 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.301539 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf07624a-74f5-4561-81f2-d1955c199a85-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf07624a-74f5-4561-81f2-d1955c199a85\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.301871 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.801855184 +0000 UTC m=+150.974254965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.402810 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.402974 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.902950217 +0000 UTC m=+151.075349988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.403035 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf07624a-74f5-4561-81f2-d1955c199a85-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf07624a-74f5-4561-81f2-d1955c199a85\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.403085 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf07624a-74f5-4561-81f2-d1955c199a85-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf07624a-74f5-4561-81f2-d1955c199a85\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.403136 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf07624a-74f5-4561-81f2-d1955c199a85-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf07624a-74f5-4561-81f2-d1955c199a85\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.403140 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: 
\"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.403379 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:54.903367459 +0000 UTC m=+151.075767240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.459352 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:10:54 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:10:54 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:10:54 crc kubenswrapper[4697]: healthz check failed Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.459644 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.459868 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cf07624a-74f5-4561-81f2-d1955c199a85-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf07624a-74f5-4561-81f2-d1955c199a85\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.512987 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.513244 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.013229407 +0000 UTC m=+151.185629188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.517047 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.628352 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.628880 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.128865621 +0000 UTC m=+151.301265412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.729098 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.729361 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.229345107 +0000 UTC m=+151.401744888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.756180 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-56255"] Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.767439 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.772492 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.775699 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56255"] Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.830587 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.830995 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.330979167 +0000 UTC m=+151.503378948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.933147 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.933339 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-catalog-content\") pod \"community-operators-56255\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.933408 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-utilities\") pod \"community-operators-56255\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:54 crc kubenswrapper[4697]: I0127 15:10:54.933439 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvtd9\" (UniqueName: \"kubernetes.io/projected/316f7102-a9a6-40c4-b38b-ba9c7736526a-kube-api-access-gvtd9\") pod \"community-operators-56255\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:54 crc kubenswrapper[4697]: E0127 15:10:54.933539 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.433523743 +0000 UTC m=+151.605923524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.036373 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-utilities\") pod \"community-operators-56255\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.036456 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvtd9\" (UniqueName: \"kubernetes.io/projected/316f7102-a9a6-40c4-b38b-ba9c7736526a-kube-api-access-gvtd9\") pod \"community-operators-56255\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " pod="openshift-marketplace/community-operators-56255" 
Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.036536 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-catalog-content\") pod \"community-operators-56255\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.036577 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.037549 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-catalog-content\") pod \"community-operators-56255\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.037833 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-utilities\") pod \"community-operators-56255\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 15:10:55.037853 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.53783791 +0000 UTC m=+151.710237691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.058588 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-59htg"] Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.059446 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.062897 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.101051 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-59htg"] Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.109702 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.109743 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.135008 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvtd9\" (UniqueName: \"kubernetes.io/projected/316f7102-a9a6-40c4-b38b-ba9c7736526a-kube-api-access-gvtd9\") pod \"community-operators-56255\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.139263 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 15:10:55.139545 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.639529391 +0000 UTC m=+151.811929172 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.244005 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.244074 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.245503 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djj9k\" (UniqueName: \"kubernetes.io/projected/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-kube-api-access-djj9k\") pod \"certified-operators-59htg\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.245580 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.245613 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-catalog-content\") pod 
\"certified-operators-59htg\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.245697 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-utilities\") pod \"certified-operators-59htg\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 15:10:55.246317 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.746301299 +0000 UTC m=+151.918701200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.256813 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8qkrg"] Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.264663 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: W0127 15:10:55.271202 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d790aa7a88220f2a2864808bf07f5874d8eb3fb910ab53262819ddfaea65eb3d WatchSource:0}: Error finding container d790aa7a88220f2a2864808bf07f5874d8eb3fb910ab53262819ddfaea65eb3d: Status 404 returned error can't find the container with id d790aa7a88220f2a2864808bf07f5874d8eb3fb910ab53262819ddfaea65eb3d Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.279619 4697 patch_prober.go:28] interesting pod/apiserver-76f77b778f-nmrvs container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]log ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]etcd ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]poststarthook/max-in-flight-filter ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 15:10:55 crc kubenswrapper[4697]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 15:10:55 crc kubenswrapper[4697]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 27 15:10:55 crc kubenswrapper[4697]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 15:10:55 crc kubenswrapper[4697]: 
[+]poststarthook/openshift.io-startinformers ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 15:10:55 crc kubenswrapper[4697]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 15:10:55 crc kubenswrapper[4697]: livez check failed Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.279659 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" podUID="f9c4aaa3-b53b-4b3f-8d6a-b9b7eef37362" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.286409 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8qkrg"] Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.287634 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" event={"ID":"9957edac-db7a-4d39-8224-f3e24a16bb43","Type":"ContainerStarted","Data":"c781f2cddb8ac26897da78a8ad392f8d0c5759cc8655c3903dc5ca96e928fab3"} Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.313617 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9aeac2431b6fd8e34ce4cf36ddea867fdacf199e679b07cee807fa844480fba7"} Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.313661 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"15f5e557a61b323bd689e9043a00b020533bd0ad831b9c19cfd103d0ef5f53d6"} Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.314118 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:10:55 crc 
kubenswrapper[4697]: I0127 15:10:55.350989 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.351499 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-utilities\") pod \"certified-operators-59htg\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.351563 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djj9k\" (UniqueName: \"kubernetes.io/projected/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-kube-api-access-djj9k\") pod \"certified-operators-59htg\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.351595 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-catalog-content\") pod \"certified-operators-59htg\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.351942 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-catalog-content\") pod \"certified-operators-59htg\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: 
E0127 15:10:55.352351 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.852335656 +0000 UTC m=+152.024735437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.352563 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-utilities\") pod \"certified-operators-59htg\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.428464 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-56255" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.449853 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wg52h"] Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.455706 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.455760 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-catalog-content\") pod \"community-operators-8qkrg\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") " pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.455795 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-utilities\") pod \"community-operators-8qkrg\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") " pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.455844 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqxwb\" (UniqueName: \"kubernetes.io/projected/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-kube-api-access-sqxwb\") pod \"community-operators-8qkrg\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") " pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 
15:10:55.456680 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:55.956660493 +0000 UTC m=+152.129060274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.459016 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:10:55 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:10:55 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:10:55 crc kubenswrapper[4697]: healthz check failed Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.459069 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.460063 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.474436 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djj9k\" (UniqueName: \"kubernetes.io/projected/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-kube-api-access-djj9k\") pod \"certified-operators-59htg\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.511766 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.525454 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.547077 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wg52h"] Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.556473 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.556774 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqxwb\" (UniqueName: \"kubernetes.io/projected/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-kube-api-access-sqxwb\") pod \"community-operators-8qkrg\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") " pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.556896 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-catalog-content\") pod \"community-operators-8qkrg\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") " pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.556925 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-utilities\") pod \"community-operators-8qkrg\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") " pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.556479 4697 patch_prober.go:28] interesting pod/console-f9d7485db-wjd95 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.557204 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wjd95" podUID="0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.557452 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-utilities\") pod \"community-operators-8qkrg\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") " pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 15:10:55.557126 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:10:56.057094068 +0000 UTC m=+152.229493849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.558018 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-catalog-content\") pod \"community-operators-8qkrg\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") " pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.603879 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqxwb\" (UniqueName: \"kubernetes.io/projected/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-kube-api-access-sqxwb\") pod \"community-operators-8qkrg\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") " pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.658564 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.658878 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-catalog-content\") pod \"certified-operators-wg52h\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") " pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.659025 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-utilities\") pod \"certified-operators-wg52h\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") " pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.659132 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtldz\" (UniqueName: \"kubernetes.io/projected/9f446277-5df0-4b04-9f9b-cce248835bcd-kube-api-access-rtldz\") pod \"certified-operators-wg52h\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") " pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 15:10:55.661526 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.161506808 +0000 UTC m=+152.333906699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.679948 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.679996 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.680295 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.680309 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.687140 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.732046 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.734805 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.760664 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 15:10:55.760748 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.260730718 +0000 UTC m=+152.433130499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.761124 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.761166 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-catalog-content\") pod \"certified-operators-wg52h\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") " pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.761238 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-utilities\") pod \"certified-operators-wg52h\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") " pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.761264 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtldz\" (UniqueName: \"kubernetes.io/projected/9f446277-5df0-4b04-9f9b-cce248835bcd-kube-api-access-rtldz\") pod \"certified-operators-wg52h\" (UID: 
\"9f446277-5df0-4b04-9f9b-cce248835bcd\") " pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 15:10:55.762935 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.262919791 +0000 UTC m=+152.435319572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.762951 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-utilities\") pod \"certified-operators-wg52h\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") " pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.763313 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-catalog-content\") pod \"certified-operators-wg52h\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") " pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.835639 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtldz\" (UniqueName: \"kubernetes.io/projected/9f446277-5df0-4b04-9f9b-cce248835bcd-kube-api-access-rtldz\") pod \"certified-operators-wg52h\" (UID: 
\"9f446277-5df0-4b04-9f9b-cce248835bcd\") " pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.877552 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 15:10:55.882996 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.382969083 +0000 UTC m=+152.555368864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.903103 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wg52h" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.903627 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.930506 4697 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-vj475 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.930549 4697 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-vj475 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.930563 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" podUID="cef607c4-e16c-4ae6-9b66-3206b100267c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.930592 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" podUID="cef607c4-e16c-4ae6-9b66-3206b100267c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.938988 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.943088 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.943119 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.949574 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-v52qb" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.958603 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:55 crc kubenswrapper[4697]: I0127 15:10:55.984519 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:55 crc kubenswrapper[4697]: E0127 15:10:55.986483 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.486454216 +0000 UTC m=+152.658853987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.086794 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.086975 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.586950713 +0000 UTC m=+152.759350494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.087022 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.087448 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.587440097 +0000 UTC m=+152.759839878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.192200 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.192565 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.692547907 +0000 UTC m=+152.864947688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.271184 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.294195 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.294542 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.794527437 +0000 UTC m=+152.966927218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.363493 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8e113883f245c0a0753e6842a2c50d1047d58df83c492f554fe63f368faf6edd"} Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.366014 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" event={"ID":"9957edac-db7a-4d39-8224-f3e24a16bb43","Type":"ContainerStarted","Data":"ac143e3963b58f3b98face666edc9275d12d1dd7e127940e5af5de36d2da1deb"} Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.381589 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cbb9f5fcc90285e42d26671ccff5f11715e3a9ef8b4ed8b5f1e1383f00a81acd"} Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.381635 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d790aa7a88220f2a2864808bf07f5874d8eb3fb910ab53262819ddfaea65eb3d"} Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.395175 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.396662 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:56.89664296 +0000 UTC m=+153.069042751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.410641 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf07624a-74f5-4561-81f2-d1955c199a85","Type":"ContainerStarted","Data":"c1b0721ee83f379997085f772f541c37990ff5025e62cd12e6a09ac9bbf35986"} Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.437461 4697 generic.go:334] "Generic (PLEG): container finished" podID="79f6280f-8dc0-42b8-be4c-cbbc6528bf58" containerID="fa905afa8cf330172707edf7a50b0996520776d36a6e83c10c9788272644f84f" exitCode=0 Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.438452 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" 
event={"ID":"79f6280f-8dc0-42b8-be4c-cbbc6528bf58","Type":"ContainerDied","Data":"fa905afa8cf330172707edf7a50b0996520776d36a6e83c10c9788272644f84f"} Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.443284 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.449540 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:10:56 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:10:56 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:10:56 crc kubenswrapper[4697]: healthz check failed Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.449592 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.451328 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-59htg"] Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.463110 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h46d6" Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.487714 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.498134 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qkkgm" Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 
15:10:56.499719 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.501560 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.001546274 +0000 UTC m=+153.173946055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.543188 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-js7zh" Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.551919 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dr2qr" Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.618403 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.619500 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.119480735 +0000 UTC m=+153.291880516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.727609 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.727970 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.227959232 +0000 UTC m=+153.400359013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.830476 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.831358 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.331337012 +0000 UTC m=+153.503736793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:56 crc kubenswrapper[4697]: I0127 15:10:56.932851 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:56 crc kubenswrapper[4697]: E0127 15:10:56.933212 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.433175788 +0000 UTC m=+153.605575569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.037268 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.037412 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.537382861 +0000 UTC m=+153.709782652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.037535 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.037838 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.537817494 +0000 UTC m=+153.710217275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.139427 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.139642 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.639616788 +0000 UTC m=+153.812016569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.139715 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.140113 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.640105523 +0000 UTC m=+153.812505304 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.159562 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cbv2z"] Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.160642 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.162669 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.186946 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbv2z"] Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.209520 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56255"] Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.240435 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.240694 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hrcv\" (UniqueName: 
\"kubernetes.io/projected/d7864bf9-220d-402f-bb77-0240a422c2f8-kube-api-access-5hrcv\") pod \"redhat-marketplace-cbv2z\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.240733 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-catalog-content\") pod \"redhat-marketplace-cbv2z\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.240749 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-utilities\") pod \"redhat-marketplace-cbv2z\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.240841 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.740826676 +0000 UTC m=+153.913226457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.342511 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.342582 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hrcv\" (UniqueName: \"kubernetes.io/projected/d7864bf9-220d-402f-bb77-0240a422c2f8-kube-api-access-5hrcv\") pod \"redhat-marketplace-cbv2z\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.342616 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-catalog-content\") pod \"redhat-marketplace-cbv2z\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.342633 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-utilities\") pod \"redhat-marketplace-cbv2z\" (UID: 
\"d7864bf9-220d-402f-bb77-0240a422c2f8\") " pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.343027 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-utilities\") pod \"redhat-marketplace-cbv2z\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.343254 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.843241458 +0000 UTC m=+154.015641239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.345051 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-catalog-content\") pod \"redhat-marketplace-cbv2z\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.372912 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wg52h"] Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.384328 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5hrcv\" (UniqueName: \"kubernetes.io/projected/d7864bf9-220d-402f-bb77-0240a422c2f8-kube-api-access-5hrcv\") pod \"redhat-marketplace-cbv2z\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.443407 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.443687 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:57.943673123 +0000 UTC m=+154.116072904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.444697 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:10:57 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:10:57 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:10:57 crc kubenswrapper[4697]: healthz check failed Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.444742 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.450597 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wg52h" event={"ID":"9f446277-5df0-4b04-9f9b-cce248835bcd","Type":"ContainerStarted","Data":"a812afe0a72b6cdfb23d4e9eeeb4fde1feca2263015b42cee84612f178454cda"} Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.463116 4697 generic.go:334] "Generic (PLEG): container finished" podID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerID="8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba" exitCode=0 Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.463181 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-59htg" event={"ID":"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d","Type":"ContainerDied","Data":"8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba"} Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.463211 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59htg" event={"ID":"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d","Type":"ContainerStarted","Data":"e42ec4b279cb18c82c3c8bc9e910789f2171d0df56384daf8541822228ac1ce6"} Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.468613 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8qkrg"] Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.473549 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.473829 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.474833 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56255" event={"ID":"316f7102-a9a6-40c4-b38b-ba9c7736526a","Type":"ContainerStarted","Data":"2af11725f4c88077db07e7383c9c73df1ff77fc5eedeb887c192cd8cb63fc249"} Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.480847 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf07624a-74f5-4561-81f2-d1955c199a85","Type":"ContainerStarted","Data":"7a111e742c5e2df8f12fcd4f299600f51be5142d0bac8080ae0349952333b96f"} Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.492847 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"979d09e5c5a842c4a9c148dd759dbb5adb9af95f017ff3074aa559c6b6458790"} Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.547075 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.549178 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:58.049163264 +0000 UTC m=+154.221563045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.563357 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nckjk"] Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.564361 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.598458 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nckjk"] Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.649766 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.649989 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-catalog-content\") pod \"redhat-marketplace-nckjk\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") " pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.650042 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rcr6\" (UniqueName: \"kubernetes.io/projected/4e1946c0-832f-4b77-8e87-a716e9a10a8f-kube-api-access-2rcr6\") pod \"redhat-marketplace-nckjk\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") " pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.650070 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-utilities\") pod \"redhat-marketplace-nckjk\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") " pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.650182 4697 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:10:58.150166955 +0000 UTC m=+154.322566736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.752728 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-catalog-content\") pod \"redhat-marketplace-nckjk\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") " pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.753135 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.753181 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rcr6\" (UniqueName: \"kubernetes.io/projected/4e1946c0-832f-4b77-8e87-a716e9a10a8f-kube-api-access-2rcr6\") pod \"redhat-marketplace-nckjk\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") " pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 
15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.753223 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-utilities\") pod \"redhat-marketplace-nckjk\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") " pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.753412 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-catalog-content\") pod \"redhat-marketplace-nckjk\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") " pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.753608 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-utilities\") pod \"redhat-marketplace-nckjk\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") " pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: E0127 15:10:57.753828 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:10:58.253809483 +0000 UTC m=+154.426209264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qlprf" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.764131 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8b9kp" podStartSLOduration=14.7640864 podStartE2EDuration="14.7640864s" podCreationTimestamp="2026-01-27 15:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:57.703991551 +0000 UTC m=+153.876391332" watchObservedRunningTime="2026-01-27 15:10:57.7640864 +0000 UTC m=+153.936486181" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.769633 4697 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.780478 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rcr6\" (UniqueName: \"kubernetes.io/projected/4e1946c0-832f-4b77-8e87-a716e9a10a8f-kube-api-access-2rcr6\") pod \"redhat-marketplace-nckjk\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") " pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.797570 4697 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T15:10:57.769682821Z","Handler":null,"Name":""} Jan 27 15:10:57 crc 
kubenswrapper[4697]: I0127 15:10:57.809043 4697 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.809086 4697 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.856114 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.898666 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.931820 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wst7l"] Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.942027 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vj475" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.942290 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.944385 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.962614 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wst7l"] Jan 27 15:10:57 crc kubenswrapper[4697]: I0127 15:10:57.990938 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.060480 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-utilities\") pod \"redhat-operators-wst7l\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.060539 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.060580 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-catalog-content\") pod \"redhat-operators-wst7l\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.060702 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pdh\" (UniqueName: \"kubernetes.io/projected/20946332-e642-4802-b943-8c504ef8c3ec-kube-api-access-t5pdh\") pod \"redhat-operators-wst7l\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.127502 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.129007 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbv2z"] Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.139776 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5h858"] Jan 27 15:10:58 crc kubenswrapper[4697]: E0127 15:10:58.140047 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f6280f-8dc0-42b8-be4c-cbbc6528bf58" containerName="collect-profiles" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.140061 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f6280f-8dc0-42b8-be4c-cbbc6528bf58" containerName="collect-profiles" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.140189 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f6280f-8dc0-42b8-be4c-cbbc6528bf58" containerName="collect-profiles" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.143545 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.163660 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-config-volume\") pod \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.163706 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-secret-volume\") pod \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.163778 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz6g9\" (UniqueName: \"kubernetes.io/projected/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-kube-api-access-kz6g9\") pod \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\" (UID: \"79f6280f-8dc0-42b8-be4c-cbbc6528bf58\") " Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.164089 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-catalog-content\") pod \"redhat-operators-wst7l\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.164134 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4h76\" (UniqueName: \"kubernetes.io/projected/e2bffcd5-911f-4cd1-92b3-e70c361719c4-kube-api-access-x4h76\") pod \"redhat-operators-5h858\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") " pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: 
I0127 15:10:58.164161 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5pdh\" (UniqueName: \"kubernetes.io/projected/20946332-e642-4802-b943-8c504ef8c3ec-kube-api-access-t5pdh\") pod \"redhat-operators-wst7l\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.164963 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-utilities\") pod \"redhat-operators-5h858\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") " pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.165013 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-utilities\") pod \"redhat-operators-wst7l\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.165068 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-catalog-content\") pod \"redhat-operators-5h858\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") " pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.165530 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-catalog-content\") pod \"redhat-operators-wst7l\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.165764 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-utilities\") pod \"redhat-operators-wst7l\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.166379 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-config-volume" (OuterVolumeSpecName: "config-volume") pod "79f6280f-8dc0-42b8-be4c-cbbc6528bf58" (UID: "79f6280f-8dc0-42b8-be4c-cbbc6528bf58"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.171384 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5h858"] Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.176984 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "79f6280f-8dc0-42b8-be4c-cbbc6528bf58" (UID: "79f6280f-8dc0-42b8-be4c-cbbc6528bf58"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.180190 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-kube-api-access-kz6g9" (OuterVolumeSpecName: "kube-api-access-kz6g9") pod "79f6280f-8dc0-42b8-be4c-cbbc6528bf58" (UID: "79f6280f-8dc0-42b8-be4c-cbbc6528bf58"). InnerVolumeSpecName "kube-api-access-kz6g9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.206868 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5pdh\" (UniqueName: \"kubernetes.io/projected/20946332-e642-4802-b943-8c504ef8c3ec-kube-api-access-t5pdh\") pod \"redhat-operators-wst7l\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.266311 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4h76\" (UniqueName: \"kubernetes.io/projected/e2bffcd5-911f-4cd1-92b3-e70c361719c4-kube-api-access-x4h76\") pod \"redhat-operators-5h858\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") " pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.266684 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-utilities\") pod \"redhat-operators-5h858\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") " pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.266870 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-catalog-content\") pod \"redhat-operators-5h858\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") " pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.266915 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.266926 4697 reconciler_common.go:293] "Volume detached 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.266935 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz6g9\" (UniqueName: \"kubernetes.io/projected/79f6280f-8dc0-42b8-be4c-cbbc6528bf58-kube-api-access-kz6g9\") on node \"crc\" DevicePath \"\"" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.267246 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-catalog-content\") pod \"redhat-operators-5h858\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") " pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.267430 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-utilities\") pod \"redhat-operators-5h858\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") " pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.269854 4697 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.269940 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.284527 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4h76\" (UniqueName: \"kubernetes.io/projected/e2bffcd5-911f-4cd1-92b3-e70c361719c4-kube-api-access-x4h76\") pod \"redhat-operators-5h858\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") " pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.284929 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.321965 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nckjk"] Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.333351 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qlprf\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.370034 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.451097 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:10:58 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:10:58 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:10:58 crc kubenswrapper[4697]: healthz check failed Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.451154 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.496237 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5h858" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.579694 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.580362 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nckjk" event={"ID":"4e1946c0-832f-4b77-8e87-a716e9a10a8f","Type":"ContainerStarted","Data":"99b0c3cb7f7d2f7e1bac76e58332eb7e0460b411beb9ca843ecfcdb2df370dc0"} Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.580384 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wst7l"] Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.586902 4697 generic.go:334] "Generic (PLEG): container finished" podID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerID="be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c" exitCode=0 Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.587004 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wg52h" event={"ID":"9f446277-5df0-4b04-9f9b-cce248835bcd","Type":"ContainerDied","Data":"be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c"} Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.621586 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv2z" event={"ID":"d7864bf9-220d-402f-bb77-0240a422c2f8","Type":"ContainerStarted","Data":"e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95"} Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.621672 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv2z" 
event={"ID":"d7864bf9-220d-402f-bb77-0240a422c2f8","Type":"ContainerStarted","Data":"a136edd5440e98ce1be0cc6e4f49af93fe0a252c2a0fbed45df56ed8399f459e"} Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.626651 4697 generic.go:334] "Generic (PLEG): container finished" podID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerID="905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3" exitCode=0 Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.627591 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56255" event={"ID":"316f7102-a9a6-40c4-b38b-ba9c7736526a","Type":"ContainerDied","Data":"905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3"} Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.631072 4697 generic.go:334] "Generic (PLEG): container finished" podID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerID="ac33bd78fc30639888157b9f37c439f103a5d92365b90fa84e5cbc4e33047953" exitCode=0 Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.631128 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qkrg" event={"ID":"8945ffcc-ee9c-46ab-b2dd-474253d4ba03","Type":"ContainerDied","Data":"ac33bd78fc30639888157b9f37c439f103a5d92365b90fa84e5cbc4e33047953"} Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.631145 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qkrg" event={"ID":"8945ffcc-ee9c-46ab-b2dd-474253d4ba03","Type":"ContainerStarted","Data":"237e7f03ad44043aeac263652b7a0506ff25e70f413d1ee58ce03758a2d9f739"} Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.653560 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.654158 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69" event={"ID":"79f6280f-8dc0-42b8-be4c-cbbc6528bf58","Type":"ContainerDied","Data":"68c9cc7a37facf9e7d739a8819d0899e5dd90a151d0e7075a4ba5bad7ecd91f4"} Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.655039 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68c9cc7a37facf9e7d739a8819d0899e5dd90a151d0e7075a4ba5bad7ecd91f4" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.666583 4697 generic.go:334] "Generic (PLEG): container finished" podID="cf07624a-74f5-4561-81f2-d1955c199a85" containerID="7a111e742c5e2df8f12fcd4f299600f51be5142d0bac8080ae0349952333b96f" exitCode=0 Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.666657 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf07624a-74f5-4561-81f2-d1955c199a85","Type":"ContainerDied","Data":"7a111e742c5e2df8f12fcd4f299600f51be5142d0bac8080ae0349952333b96f"} Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.901738 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.940392 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qlprf"] Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.983453 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf07624a-74f5-4561-81f2-d1955c199a85-kubelet-dir\") pod \"cf07624a-74f5-4561-81f2-d1955c199a85\" (UID: \"cf07624a-74f5-4561-81f2-d1955c199a85\") " Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.983525 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf07624a-74f5-4561-81f2-d1955c199a85-kube-api-access\") pod \"cf07624a-74f5-4561-81f2-d1955c199a85\" (UID: \"cf07624a-74f5-4561-81f2-d1955c199a85\") " Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.983647 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf07624a-74f5-4561-81f2-d1955c199a85-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf07624a-74f5-4561-81f2-d1955c199a85" (UID: "cf07624a-74f5-4561-81f2-d1955c199a85"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.983754 4697 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf07624a-74f5-4561-81f2-d1955c199a85-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:10:58 crc kubenswrapper[4697]: I0127 15:10:58.989297 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf07624a-74f5-4561-81f2-d1955c199a85-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf07624a-74f5-4561-81f2-d1955c199a85" (UID: "cf07624a-74f5-4561-81f2-d1955c199a85"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:10:59 crc kubenswrapper[4697]: W0127 15:10:59.002040 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43fd9fa4_b232_4d49_8f52_27d016de4cad.slice/crio-ecb5fb67f830321bb79acff313b3649d77744cb96b27d21a807a3b03c69d1093 WatchSource:0}: Error finding container ecb5fb67f830321bb79acff313b3649d77744cb96b27d21a807a3b03c69d1093: Status 404 returned error can't find the container with id ecb5fb67f830321bb79acff313b3649d77744cb96b27d21a807a3b03c69d1093 Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.026023 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5h858"] Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.042963 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 15:10:59 crc kubenswrapper[4697]: E0127 15:10:59.043212 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf07624a-74f5-4561-81f2-d1955c199a85" containerName="pruner" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.043229 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf07624a-74f5-4561-81f2-d1955c199a85" 
containerName="pruner" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.043324 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf07624a-74f5-4561-81f2-d1955c199a85" containerName="pruner" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.043651 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.048037 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.048677 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.053576 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.085019 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5511eebc-21ab-440b-ad78-6daddd45ce35-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5511eebc-21ab-440b-ad78-6daddd45ce35\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.085273 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5511eebc-21ab-440b-ad78-6daddd45ce35-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5511eebc-21ab-440b-ad78-6daddd45ce35\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.085745 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf07624a-74f5-4561-81f2-d1955c199a85-kube-api-access\") 
on node \"crc\" DevicePath \"\"" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.187190 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5511eebc-21ab-440b-ad78-6daddd45ce35-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5511eebc-21ab-440b-ad78-6daddd45ce35\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.187284 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5511eebc-21ab-440b-ad78-6daddd45ce35-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5511eebc-21ab-440b-ad78-6daddd45ce35\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.187369 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5511eebc-21ab-440b-ad78-6daddd45ce35-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5511eebc-21ab-440b-ad78-6daddd45ce35\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.204373 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5511eebc-21ab-440b-ad78-6daddd45ce35-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5511eebc-21ab-440b-ad78-6daddd45ce35\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.334186 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.364856 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.445897 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:10:59 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:10:59 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:10:59 crc kubenswrapper[4697]: healthz check failed Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.446164 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.668956 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.691976 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h858" event={"ID":"e2bffcd5-911f-4cd1-92b3-e70c361719c4","Type":"ContainerStarted","Data":"86a78fec4a65667fa92b3b24055434ff637f114cc36e4acea9b42dad5d50a1aa"} Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.696493 4697 generic.go:334] "Generic (PLEG): container finished" podID="20946332-e642-4802-b943-8c504ef8c3ec" containerID="ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359" exitCode=0 Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.696562 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wst7l" event={"ID":"20946332-e642-4802-b943-8c504ef8c3ec","Type":"ContainerDied","Data":"ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359"} Jan 27 15:10:59 crc 
kubenswrapper[4697]: I0127 15:10:59.696593 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wst7l" event={"ID":"20946332-e642-4802-b943-8c504ef8c3ec","Type":"ContainerStarted","Data":"d2c42e721683e1257b972b3e1f0950b092bca4e3a92b391de6e02aba5604a70e"} Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.699454 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" event={"ID":"43fd9fa4-b232-4d49-8f52-27d016de4cad","Type":"ContainerStarted","Data":"4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd"} Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.699500 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" event={"ID":"43fd9fa4-b232-4d49-8f52-27d016de4cad","Type":"ContainerStarted","Data":"ecb5fb67f830321bb79acff313b3649d77744cb96b27d21a807a3b03c69d1093"} Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.699696 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.712192 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.712238 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf07624a-74f5-4561-81f2-d1955c199a85","Type":"ContainerDied","Data":"c1b0721ee83f379997085f772f541c37990ff5025e62cd12e6a09ac9bbf35986"} Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.712301 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1b0721ee83f379997085f772f541c37990ff5025e62cd12e6a09ac9bbf35986" Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.731356 4697 generic.go:334] "Generic (PLEG): container finished" podID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerID="8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5" exitCode=0 Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.731439 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nckjk" event={"ID":"4e1946c0-832f-4b77-8e87-a716e9a10a8f","Type":"ContainerDied","Data":"8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5"} Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.734158 4697 generic.go:334] "Generic (PLEG): container finished" podID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerID="e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95" exitCode=0 Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.734185 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv2z" event={"ID":"d7864bf9-220d-402f-bb77-0240a422c2f8","Type":"ContainerDied","Data":"e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95"} Jan 27 15:10:59 crc kubenswrapper[4697]: I0127 15:10:59.790383 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" 
podStartSLOduration=133.790362207 podStartE2EDuration="2m13.790362207s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:10:59.750870684 +0000 UTC m=+155.923270465" watchObservedRunningTime="2026-01-27 15:10:59.790362207 +0000 UTC m=+155.962761988" Jan 27 15:11:00 crc kubenswrapper[4697]: I0127 15:11:00.252366 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:11:00 crc kubenswrapper[4697]: I0127 15:11:00.269171 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-nmrvs" Jan 27 15:11:00 crc kubenswrapper[4697]: I0127 15:11:00.449069 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:11:00 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:11:00 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:11:00 crc kubenswrapper[4697]: healthz check failed Jan 27 15:11:00 crc kubenswrapper[4697]: I0127 15:11:00.449117 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:11:00 crc kubenswrapper[4697]: I0127 15:11:00.758622 4697 generic.go:334] "Generic (PLEG): container finished" podID="5511eebc-21ab-440b-ad78-6daddd45ce35" containerID="7f7a256fcca4af5e699ebe23a9a55c0f20951f0dbdb03f02f71c68433e80ce7c" exitCode=0 Jan 27 15:11:00 crc kubenswrapper[4697]: I0127 15:11:00.758703 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5511eebc-21ab-440b-ad78-6daddd45ce35","Type":"ContainerDied","Data":"7f7a256fcca4af5e699ebe23a9a55c0f20951f0dbdb03f02f71c68433e80ce7c"} Jan 27 15:11:00 crc kubenswrapper[4697]: I0127 15:11:00.758729 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5511eebc-21ab-440b-ad78-6daddd45ce35","Type":"ContainerStarted","Data":"fe38b6e6b195def3815343905caf583454247b28b0eb6f78cedc166c15bed8ab"} Jan 27 15:11:00 crc kubenswrapper[4697]: I0127 15:11:00.769379 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerID="6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c" exitCode=0 Jan 27 15:11:00 crc kubenswrapper[4697]: I0127 15:11:00.769956 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h858" event={"ID":"e2bffcd5-911f-4cd1-92b3-e70c361719c4","Type":"ContainerDied","Data":"6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c"} Jan 27 15:11:01 crc kubenswrapper[4697]: I0127 15:11:01.208074 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5xn9m" Jan 27 15:11:01 crc kubenswrapper[4697]: I0127 15:11:01.445148 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:11:01 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:11:01 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:11:01 crc kubenswrapper[4697]: healthz check failed Jan 27 15:11:01 crc kubenswrapper[4697]: I0127 15:11:01.445203 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" 
podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.118604 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.230365 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5511eebc-21ab-440b-ad78-6daddd45ce35-kubelet-dir\") pod \"5511eebc-21ab-440b-ad78-6daddd45ce35\" (UID: \"5511eebc-21ab-440b-ad78-6daddd45ce35\") " Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.230433 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5511eebc-21ab-440b-ad78-6daddd45ce35-kube-api-access\") pod \"5511eebc-21ab-440b-ad78-6daddd45ce35\" (UID: \"5511eebc-21ab-440b-ad78-6daddd45ce35\") " Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.231665 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5511eebc-21ab-440b-ad78-6daddd45ce35-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5511eebc-21ab-440b-ad78-6daddd45ce35" (UID: "5511eebc-21ab-440b-ad78-6daddd45ce35"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.261555 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5511eebc-21ab-440b-ad78-6daddd45ce35-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5511eebc-21ab-440b-ad78-6daddd45ce35" (UID: "5511eebc-21ab-440b-ad78-6daddd45ce35"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.333409 4697 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5511eebc-21ab-440b-ad78-6daddd45ce35-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.333450 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5511eebc-21ab-440b-ad78-6daddd45ce35-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.445859 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:11:02 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:11:02 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:11:02 crc kubenswrapper[4697]: healthz check failed Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.445912 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.805961 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5511eebc-21ab-440b-ad78-6daddd45ce35","Type":"ContainerDied","Data":"fe38b6e6b195def3815343905caf583454247b28b0eb6f78cedc166c15bed8ab"} Jan 27 15:11:02 crc kubenswrapper[4697]: I0127 15:11:02.806006 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe38b6e6b195def3815343905caf583454247b28b0eb6f78cedc166c15bed8ab" Jan 27 15:11:02 crc kubenswrapper[4697]: 
I0127 15:11:02.806068 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:11:03 crc kubenswrapper[4697]: I0127 15:11:03.446348 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:11:03 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:11:03 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:11:03 crc kubenswrapper[4697]: healthz check failed Jan 27 15:11:03 crc kubenswrapper[4697]: I0127 15:11:03.446644 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:11:04 crc kubenswrapper[4697]: I0127 15:11:04.445176 4697 patch_prober.go:28] interesting pod/router-default-5444994796-wmwsd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:11:04 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 27 15:11:04 crc kubenswrapper[4697]: [+]process-running ok Jan 27 15:11:04 crc kubenswrapper[4697]: healthz check failed Jan 27 15:11:04 crc kubenswrapper[4697]: I0127 15:11:04.445235 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wmwsd" podUID="d50c0395-ec10-4463-92e4-29defdd47f62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:11:05 crc kubenswrapper[4697]: I0127 15:11:05.445907 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:11:05 crc kubenswrapper[4697]: I0127 15:11:05.450737 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wmwsd" Jan 27 15:11:05 crc kubenswrapper[4697]: I0127 15:11:05.502333 4697 patch_prober.go:28] interesting pod/console-f9d7485db-wjd95 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 15:11:05 crc kubenswrapper[4697]: I0127 15:11:05.502425 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wjd95" podUID="0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 15:11:05 crc kubenswrapper[4697]: I0127 15:11:05.678592 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:11:05 crc kubenswrapper[4697]: I0127 15:11:05.678654 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:11:05 crc kubenswrapper[4697]: I0127 15:11:05.678896 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:11:05 crc kubenswrapper[4697]: 
I0127 15:11:05.678942 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:11:08 crc kubenswrapper[4697]: I0127 15:11:08.336351 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:11:08 crc kubenswrapper[4697]: I0127 15:11:08.342450 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11ed6885-450d-477c-8e08-acf5fbde2fa3-metrics-certs\") pod \"network-metrics-daemon-vwctp\" (UID: \"11ed6885-450d-477c-8e08-acf5fbde2fa3\") " pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:11:08 crc kubenswrapper[4697]: I0127 15:11:08.581069 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vwctp" Jan 27 15:11:13 crc kubenswrapper[4697]: I0127 15:11:13.256373 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7ddjp"] Jan 27 15:11:13 crc kubenswrapper[4697]: I0127 15:11:13.258246 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" containerID="cri-o://2fc1f82f5af8a5feb11e657399a7ee2576c5e556ea67cadb25f04438e85c53ca" gracePeriod=30 Jan 27 15:11:13 crc kubenswrapper[4697]: I0127 15:11:13.270730 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj"] Jan 27 15:11:13 crc kubenswrapper[4697]: I0127 15:11:13.270938 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" containerID="cri-o://4d7ec758b6907fa68f890c141ed29a5d149c6698b620aedc7be76c3588e23169" gracePeriod=30 Jan 27 15:11:14 crc kubenswrapper[4697]: I0127 15:11:14.988546 4697 generic.go:334] "Generic (PLEG): container finished" podID="894a6339-d208-46db-8769-ac9153cb1ba0" containerID="2fc1f82f5af8a5feb11e657399a7ee2576c5e556ea67cadb25f04438e85c53ca" exitCode=0 Jan 27 15:11:14 crc kubenswrapper[4697]: I0127 15:11:14.988622 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" event={"ID":"894a6339-d208-46db-8769-ac9153cb1ba0","Type":"ContainerDied","Data":"2fc1f82f5af8a5feb11e657399a7ee2576c5e556ea67cadb25f04438e85c53ca"} Jan 27 15:11:14 crc kubenswrapper[4697]: I0127 15:11:14.990468 4697 generic.go:334] "Generic (PLEG): container finished" 
podID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerID="4d7ec758b6907fa68f890c141ed29a5d149c6698b620aedc7be76c3588e23169" exitCode=0 Jan 27 15:11:14 crc kubenswrapper[4697]: I0127 15:11:14.990494 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" event={"ID":"24828dfa-ec12-4de9-aaba-96716e62d49a","Type":"ContainerDied","Data":"4d7ec758b6907fa68f890c141ed29a5d149c6698b620aedc7be76c3588e23169"} Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.485026 4697 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7ddjp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.485085 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.505856 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.516310 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.677486 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 
15:11:15.677516 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.677534 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.677572 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-78k6r" Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.677568 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.678055 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.678067 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"e33bd7cbf3778228ca190ab911c0b0638a3e181005a1e63447946ef07e9c92da"} pod="openshift-console/downloads-7954f5f757-78k6r" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 
15:11:15.678149 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" containerID="cri-o://e33bd7cbf3778228ca190ab911c0b0638a3e181005a1e63447946ef07e9c92da" gracePeriod=2 Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.679023 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.927763 4697 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w85nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 15:11:15 crc kubenswrapper[4697]: I0127 15:11:15.927825 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 27 15:11:18 crc kubenswrapper[4697]: I0127 15:11:18.017003 4697 generic.go:334] "Generic (PLEG): container finished" podID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerID="e33bd7cbf3778228ca190ab911c0b0638a3e181005a1e63447946ef07e9c92da" exitCode=0 Jan 27 15:11:18 crc kubenswrapper[4697]: I0127 15:11:18.017081 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-78k6r" 
event={"ID":"73d9ac28-74b0-4ead-b4e4-b270264feb05","Type":"ContainerDied","Data":"e33bd7cbf3778228ca190ab911c0b0638a3e181005a1e63447946ef07e9c92da"} Jan 27 15:11:18 crc kubenswrapper[4697]: I0127 15:11:18.375877 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:11:25 crc kubenswrapper[4697]: I0127 15:11:25.109362 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:11:25 crc kubenswrapper[4697]: I0127 15:11:25.110019 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:11:25 crc kubenswrapper[4697]: I0127 15:11:25.484395 4697 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7ddjp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 15:11:25 crc kubenswrapper[4697]: I0127 15:11:25.484521 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 15:11:25 crc kubenswrapper[4697]: I0127 15:11:25.679184 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:11:25 crc kubenswrapper[4697]: I0127 15:11:25.679292 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:11:25 crc kubenswrapper[4697]: I0127 15:11:25.928658 4697 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w85nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 15:11:25 crc kubenswrapper[4697]: I0127 15:11:25.928753 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 27 15:11:26 crc kubenswrapper[4697]: I0127 15:11:26.484417 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wp9j5" Jan 27 15:11:34 crc kubenswrapper[4697]: I0127 15:11:34.268585 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:11:35 crc kubenswrapper[4697]: I0127 15:11:35.486291 4697 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7ddjp container/controller-manager namespace/openshift-controller-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 15:11:35 crc kubenswrapper[4697]: I0127 15:11:35.486831 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 15:11:35 crc kubenswrapper[4697]: I0127 15:11:35.678229 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:11:35 crc kubenswrapper[4697]: I0127 15:11:35.678274 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:11:35 crc kubenswrapper[4697]: I0127 15:11:35.927664 4697 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w85nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 15:11:35 crc kubenswrapper[4697]: I0127 15:11:35.927729 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 
10.217.0.10:8443: connect: connection refused" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.246357 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 15:11:38 crc kubenswrapper[4697]: E0127 15:11:38.246958 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5511eebc-21ab-440b-ad78-6daddd45ce35" containerName="pruner" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.246973 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5511eebc-21ab-440b-ad78-6daddd45ce35" containerName="pruner" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.247105 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="5511eebc-21ab-440b-ad78-6daddd45ce35" containerName="pruner" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.247561 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.251748 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.252251 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.253302 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.405699 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0fca15d-774a-465e-9da1-686eed214cd7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e0fca15d-774a-465e-9da1-686eed214cd7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.405895 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0fca15d-774a-465e-9da1-686eed214cd7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e0fca15d-774a-465e-9da1-686eed214cd7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.506936 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0fca15d-774a-465e-9da1-686eed214cd7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e0fca15d-774a-465e-9da1-686eed214cd7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.506994 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0fca15d-774a-465e-9da1-686eed214cd7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e0fca15d-774a-465e-9da1-686eed214cd7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.507025 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0fca15d-774a-465e-9da1-686eed214cd7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e0fca15d-774a-465e-9da1-686eed214cd7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.524174 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0fca15d-774a-465e-9da1-686eed214cd7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e0fca15d-774a-465e-9da1-686eed214cd7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:11:38 crc kubenswrapper[4697]: I0127 15:11:38.565767 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:11:41 crc kubenswrapper[4697]: E0127 15:11:41.487566 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 15:11:41 crc kubenswrapper[4697]: E0127 15:11:41.488072 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rcr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nckjk_openshift-marketplace(4e1946c0-832f-4b77-8e87-a716e9a10a8f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:11:41 crc kubenswrapper[4697]: E0127 15:11:41.489297 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nckjk" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.441828 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.442732 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.446746 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.590743 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01d02477-81a8-4453-bac0-0aaed3d659b5-kube-api-access\") pod \"installer-9-crc\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.590849 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-var-lock\") pod \"installer-9-crc\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.590875 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.692427 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01d02477-81a8-4453-bac0-0aaed3d659b5-kube-api-access\") pod \"installer-9-crc\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.692534 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-var-lock\") pod \"installer-9-crc\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.692562 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.692693 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.692756 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-var-lock\") pod \"installer-9-crc\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.721944 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01d02477-81a8-4453-bac0-0aaed3d659b5-kube-api-access\") pod \"installer-9-crc\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:42 crc kubenswrapper[4697]: I0127 15:11:42.804208 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:11:45 crc kubenswrapper[4697]: I0127 15:11:45.678096 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:11:45 crc kubenswrapper[4697]: I0127 15:11:45.678169 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:11:46 crc kubenswrapper[4697]: I0127 15:11:46.484673 4697 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7ddjp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:11:46 crc kubenswrapper[4697]: 
I0127 15:11:46.484750 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:11:46 crc kubenswrapper[4697]: I0127 15:11:46.928669 4697 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w85nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:11:46 crc kubenswrapper[4697]: I0127 15:11:46.928757 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:11:48 crc kubenswrapper[4697]: E0127 15:11:48.780920 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nckjk" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" Jan 27 15:11:49 crc kubenswrapper[4697]: E0127 15:11:49.633104 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 
15:11:49 crc kubenswrapper[4697]: E0127 15:11:49.633255 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqxwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8qkrg_openshift-marketplace(8945ffcc-ee9c-46ab-b2dd-474253d4ba03): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:11:49 crc kubenswrapper[4697]: E0127 15:11:49.635002 4697 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8qkrg" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" Jan 27 15:11:52 crc kubenswrapper[4697]: E0127 15:11:52.111431 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 15:11:52 crc kubenswrapper[4697]: E0127 15:11:52.111920 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvtd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,Ru
nAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-56255_openshift-marketplace(316f7102-a9a6-40c4-b38b-ba9c7736526a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:11:52 crc kubenswrapper[4697]: E0127 15:11:52.113278 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-56255" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" Jan 27 15:11:55 crc kubenswrapper[4697]: I0127 15:11:55.109000 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:11:55 crc kubenswrapper[4697]: I0127 15:11:55.109324 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:11:55 crc kubenswrapper[4697]: I0127 15:11:55.109366 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:11:55 crc kubenswrapper[4697]: I0127 15:11:55.109952 4697 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:11:55 crc kubenswrapper[4697]: I0127 15:11:55.110003 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2" gracePeriod=600 Jan 27 15:11:55 crc kubenswrapper[4697]: I0127 15:11:55.678104 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:11:55 crc kubenswrapper[4697]: I0127 15:11:55.678181 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:11:56 crc kubenswrapper[4697]: I0127 15:11:56.485137 4697 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7ddjp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:11:56 crc kubenswrapper[4697]: I0127 15:11:56.485782 4697 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:11:56 crc kubenswrapper[4697]: I0127 15:11:56.928851 4697 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w85nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:11:56 crc kubenswrapper[4697]: I0127 15:11:56.929280 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:11:58 crc kubenswrapper[4697]: I0127 15:11:58.466622 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2" exitCode=0 Jan 27 15:11:58 crc kubenswrapper[4697]: I0127 15:11:58.466665 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2"} Jan 27 15:11:59 crc kubenswrapper[4697]: E0127 15:11:59.712747 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-56255" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" Jan 27 15:11:59 crc kubenswrapper[4697]: E0127 15:11:59.801280 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 15:11:59 crc kubenswrapper[4697]: E0127 15:11:59.802085 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4h76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]E
nvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5h858_openshift-marketplace(e2bffcd5-911f-4cd1-92b3-e70c361719c4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:11:59 crc kubenswrapper[4697]: E0127 15:11:59.803905 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5h858" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.370553 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5h858" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.481819 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.484421 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" event={"ID":"894a6339-d208-46db-8769-ac9153cb1ba0","Type":"ContainerDied","Data":"885ac78ad4178301d09e55851c0a8ba33a024a355890367aa811516fdf404619"} Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.484446 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="885ac78ad4178301d09e55851c0a8ba33a024a355890367aa811516fdf404619" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.487173 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.488163 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" event={"ID":"24828dfa-ec12-4de9-aaba-96716e62d49a","Type":"ContainerDied","Data":"63288257e7188d729dff08ce0da5c1a00adc5fd00a1eceb002fb2582c18e1592"} Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.488197 4697 scope.go:117] "RemoveContainer" containerID="4d7ec758b6907fa68f890c141ed29a5d149c6698b620aedc7be76c3588e23169" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.488320 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.488965 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.489071 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtldz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},Star
tupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wg52h_openshift-marketplace(9f446277-5df0-4b04-9f9b-cce248835bcd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.490151 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-wg52h" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.552943 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw"] Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.553469 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.553482 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.553530 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.553542 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.553672 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" containerName="route-controller-manager" Jan 27 15:12:01 crc 
kubenswrapper[4697]: I0127 15:12:01.553701 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="894a6339-d208-46db-8769-ac9153cb1ba0" containerName="controller-manager" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.554106 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.590889 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw"] Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.624273 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.624400 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hrcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cbv2z_openshift-marketplace(d7864bf9-220d-402f-bb77-0240a422c2f8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.625923 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cbv2z" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" Jan 27 15:12:01 crc 
kubenswrapper[4697]: I0127 15:12:01.638636 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcb2\" (UniqueName: \"kubernetes.io/projected/24828dfa-ec12-4de9-aaba-96716e62d49a-kube-api-access-dfcb2\") pod \"24828dfa-ec12-4de9-aaba-96716e62d49a\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.638678 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-config\") pod \"24828dfa-ec12-4de9-aaba-96716e62d49a\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.638704 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24828dfa-ec12-4de9-aaba-96716e62d49a-serving-cert\") pod \"24828dfa-ec12-4de9-aaba-96716e62d49a\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.638742 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894a6339-d208-46db-8769-ac9153cb1ba0-serving-cert\") pod \"894a6339-d208-46db-8769-ac9153cb1ba0\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.638767 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-client-ca\") pod \"894a6339-d208-46db-8769-ac9153cb1ba0\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.638806 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-config\") pod 
\"894a6339-d208-46db-8769-ac9153cb1ba0\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.638838 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25dwx\" (UniqueName: \"kubernetes.io/projected/894a6339-d208-46db-8769-ac9153cb1ba0-kube-api-access-25dwx\") pod \"894a6339-d208-46db-8769-ac9153cb1ba0\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.638882 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-proxy-ca-bundles\") pod \"894a6339-d208-46db-8769-ac9153cb1ba0\" (UID: \"894a6339-d208-46db-8769-ac9153cb1ba0\") " Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.638906 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-client-ca\") pod \"24828dfa-ec12-4de9-aaba-96716e62d49a\" (UID: \"24828dfa-ec12-4de9-aaba-96716e62d49a\") " Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.639615 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-client-ca" (OuterVolumeSpecName: "client-ca") pod "24828dfa-ec12-4de9-aaba-96716e62d49a" (UID: "24828dfa-ec12-4de9-aaba-96716e62d49a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.639625 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-config" (OuterVolumeSpecName: "config") pod "24828dfa-ec12-4de9-aaba-96716e62d49a" (UID: "24828dfa-ec12-4de9-aaba-96716e62d49a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.640431 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-client-ca" (OuterVolumeSpecName: "client-ca") pod "894a6339-d208-46db-8769-ac9153cb1ba0" (UID: "894a6339-d208-46db-8769-ac9153cb1ba0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.642538 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "894a6339-d208-46db-8769-ac9153cb1ba0" (UID: "894a6339-d208-46db-8769-ac9153cb1ba0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.643000 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-config" (OuterVolumeSpecName: "config") pod "894a6339-d208-46db-8769-ac9153cb1ba0" (UID: "894a6339-d208-46db-8769-ac9153cb1ba0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.646836 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/894a6339-d208-46db-8769-ac9153cb1ba0-kube-api-access-25dwx" (OuterVolumeSpecName: "kube-api-access-25dwx") pod "894a6339-d208-46db-8769-ac9153cb1ba0" (UID: "894a6339-d208-46db-8769-ac9153cb1ba0"). InnerVolumeSpecName "kube-api-access-25dwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.648762 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/894a6339-d208-46db-8769-ac9153cb1ba0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "894a6339-d208-46db-8769-ac9153cb1ba0" (UID: "894a6339-d208-46db-8769-ac9153cb1ba0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.649548 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24828dfa-ec12-4de9-aaba-96716e62d49a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "24828dfa-ec12-4de9-aaba-96716e62d49a" (UID: "24828dfa-ec12-4de9-aaba-96716e62d49a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.653276 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24828dfa-ec12-4de9-aaba-96716e62d49a-kube-api-access-dfcb2" (OuterVolumeSpecName: "kube-api-access-dfcb2") pod "24828dfa-ec12-4de9-aaba-96716e62d49a" (UID: "24828dfa-ec12-4de9-aaba-96716e62d49a"). InnerVolumeSpecName "kube-api-access-dfcb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742223 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-config\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742296 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngd8\" (UniqueName: \"kubernetes.io/projected/e25ade21-cfd1-429b-98a7-d4d886130348-kube-api-access-bngd8\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742462 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-client-ca\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742507 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e25ade21-cfd1-429b-98a7-d4d886130348-serving-cert\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742616 4697 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742635 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfcb2\" (UniqueName: \"kubernetes.io/projected/24828dfa-ec12-4de9-aaba-96716e62d49a-kube-api-access-dfcb2\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742656 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24828dfa-ec12-4de9-aaba-96716e62d49a-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742668 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24828dfa-ec12-4de9-aaba-96716e62d49a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742680 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894a6339-d208-46db-8769-ac9153cb1ba0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742693 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742709 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.742723 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25dwx\" (UniqueName: \"kubernetes.io/projected/894a6339-d208-46db-8769-ac9153cb1ba0-kube-api-access-25dwx\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:01 
crc kubenswrapper[4697]: I0127 15:12:01.742736 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/894a6339-d208-46db-8769-ac9153cb1ba0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.786088 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.786296 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djj9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,
},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-59htg_openshift-marketplace(bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:12:01 crc kubenswrapper[4697]: E0127 15:12:01.787599 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-59htg" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.825322 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj"] Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.836078 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w85nj"] Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.844002 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e25ade21-cfd1-429b-98a7-d4d886130348-serving-cert\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.844096 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-config\") pod 
\"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.844166 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bngd8\" (UniqueName: \"kubernetes.io/projected/e25ade21-cfd1-429b-98a7-d4d886130348-kube-api-access-bngd8\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.844253 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-client-ca\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.845450 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-client-ca\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.854916 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e25ade21-cfd1-429b-98a7-d4d886130348-serving-cert\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.855467 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-config\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.865062 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bngd8\" (UniqueName: \"kubernetes.io/projected/e25ade21-cfd1-429b-98a7-d4d886130348-kube-api-access-bngd8\") pod \"route-controller-manager-559f79478c-f5jqw\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.935879 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vwctp"] Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.952959 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 15:12:01 crc kubenswrapper[4697]: I0127 15:12:01.997446 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.000982 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:02 crc kubenswrapper[4697]: E0127 15:12:02.064553 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 15:12:02 crc kubenswrapper[4697]: E0127 15:12:02.064714 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5pdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupPro
be:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wst7l_openshift-marketplace(20946332-e642-4802-b943-8c504ef8c3ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:12:02 crc kubenswrapper[4697]: E0127 15:12:02.066134 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wst7l" podUID="20946332-e642-4802-b943-8c504ef8c3ec" Jan 27 15:12:02 crc kubenswrapper[4697]: W0127 15:12:02.436490 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod01d02477_81a8_4453_bac0_0aaed3d659b5.slice/crio-35f5960c095e39ea7a83b0d0310a81ac98711fb402f3a30c50a8e5d4cf7d0497 WatchSource:0}: Error finding container 35f5960c095e39ea7a83b0d0310a81ac98711fb402f3a30c50a8e5d4cf7d0497: Status 404 returned error can't find the container with id 35f5960c095e39ea7a83b0d0310a81ac98711fb402f3a30c50a8e5d4cf7d0497 Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.492708 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vwctp" event={"ID":"11ed6885-450d-477c-8e08-acf5fbde2fa3","Type":"ContainerStarted","Data":"5f0f748a0cc38a63422258fd2c2031c9b86430754989568adecbff297675af3c"} Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.495561 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"f9e4101694f1899c8f44fa50fe32233101f8e492ef340427ddc5bf1941a9a036"} Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.503685 4697 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console/downloads-7954f5f757-78k6r" event={"ID":"73d9ac28-74b0-4ead-b4e4-b270264feb05","Type":"ContainerStarted","Data":"52632dd34e56946aceed0938509bf9a6f1925c4cdbce5868ebade7b43fa95d6e"} Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.503950 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-78k6r" Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.504254 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.504291 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.514496 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"01d02477-81a8-4453-bac0-0aaed3d659b5","Type":"ContainerStarted","Data":"35f5960c095e39ea7a83b0d0310a81ac98711fb402f3a30c50a8e5d4cf7d0497"} Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.516453 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7ddjp" Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.517092 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e0fca15d-774a-465e-9da1-686eed214cd7","Type":"ContainerStarted","Data":"cb84cf8d8f9396d53ffcb4bbff285bafb55af81765e73a22fedcc8cde739c4f4"} Jan 27 15:12:02 crc kubenswrapper[4697]: E0127 15:12:02.578530 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cbv2z" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" Jan 27 15:12:02 crc kubenswrapper[4697]: E0127 15:12:02.579093 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wg52h" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" Jan 27 15:12:02 crc kubenswrapper[4697]: E0127 15:12:02.579237 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wst7l" podUID="20946332-e642-4802-b943-8c504ef8c3ec" Jan 27 15:12:02 crc kubenswrapper[4697]: E0127 15:12:02.584030 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-59htg" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" Jan 27 15:12:02 
crc kubenswrapper[4697]: I0127 15:12:02.585287 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24828dfa-ec12-4de9-aaba-96716e62d49a" path="/var/lib/kubelet/pods/24828dfa-ec12-4de9-aaba-96716e62d49a/volumes" Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.660233 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7ddjp"] Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.667430 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7ddjp"] Jan 27 15:12:02 crc kubenswrapper[4697]: I0127 15:12:02.964103 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw"] Jan 27 15:12:02 crc kubenswrapper[4697]: W0127 15:12:02.984080 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode25ade21_cfd1_429b_98a7_d4d886130348.slice/crio-e66384b144357903c71e566c55e2135f6c22aa0e87bd3c3284f7762e5c7a1c69 WatchSource:0}: Error finding container e66384b144357903c71e566c55e2135f6c22aa0e87bd3c3284f7762e5c7a1c69: Status 404 returned error can't find the container with id e66384b144357903c71e566c55e2135f6c22aa0e87bd3c3284f7762e5c7a1c69 Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.524241 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"01d02477-81a8-4453-bac0-0aaed3d659b5","Type":"ContainerStarted","Data":"bf15b21a484b4733ef86ec42c189cc9d9446085c9a752a3c462e69d2e66518c6"} Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.526864 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e0fca15d-774a-465e-9da1-686eed214cd7","Type":"ContainerStarted","Data":"ab192e592ce742daa619fa2c7cfc3f684e2c2dc1f315513a838c7f72cdaa80e3"} Jan 27 
15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.528718 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vwctp" event={"ID":"11ed6885-450d-477c-8e08-acf5fbde2fa3","Type":"ContainerStarted","Data":"231d8dac338cba97fd6b97e0ce4ddf6bc76c11b7b8780340fc411aa2536d4984"} Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.528752 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vwctp" event={"ID":"11ed6885-450d-477c-8e08-acf5fbde2fa3","Type":"ContainerStarted","Data":"6e4474fc84693644a77a3b6b58b8197dcff2a287a13c5329808c85f411788842"} Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.532125 4697 generic.go:334] "Generic (PLEG): container finished" podID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerID="6836e7071aba079ec10026fc72a1485b151f58b1b77e27f9b2f8b0374c1a1bca" exitCode=0 Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.532185 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qkrg" event={"ID":"8945ffcc-ee9c-46ab-b2dd-474253d4ba03","Type":"ContainerDied","Data":"6836e7071aba079ec10026fc72a1485b151f58b1b77e27f9b2f8b0374c1a1bca"} Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.538930 4697 generic.go:334] "Generic (PLEG): container finished" podID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerID="bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100" exitCode=0 Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.539003 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nckjk" event={"ID":"4e1946c0-832f-4b77-8e87-a716e9a10a8f","Type":"ContainerDied","Data":"bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100"} Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.542142 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" 
event={"ID":"e25ade21-cfd1-429b-98a7-d4d886130348","Type":"ContainerStarted","Data":"7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc"} Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.542182 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" event={"ID":"e25ade21-cfd1-429b-98a7-d4d886130348","Type":"ContainerStarted","Data":"e66384b144357903c71e566c55e2135f6c22aa0e87bd3c3284f7762e5c7a1c69"} Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.543819 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.543861 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.556038 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=21.556018052 podStartE2EDuration="21.556018052s" podCreationTimestamp="2026-01-27 15:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:12:03.550427207 +0000 UTC m=+219.722826988" watchObservedRunningTime="2026-01-27 15:12:03.556018052 +0000 UTC m=+219.728417833" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.595598 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" 
podStartSLOduration=30.595580392 podStartE2EDuration="30.595580392s" podCreationTimestamp="2026-01-27 15:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:12:03.593217271 +0000 UTC m=+219.765617052" watchObservedRunningTime="2026-01-27 15:12:03.595580392 +0000 UTC m=+219.767980183" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.682008 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=25.681953492 podStartE2EDuration="25.681953492s" podCreationTimestamp="2026-01-27 15:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:12:03.680168796 +0000 UTC m=+219.852568577" watchObservedRunningTime="2026-01-27 15:12:03.681953492 +0000 UTC m=+219.854353273" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.753843 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vwctp" podStartSLOduration=197.753822227 podStartE2EDuration="3m17.753822227s" podCreationTimestamp="2026-01-27 15:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:12:03.751318753 +0000 UTC m=+219.923718534" watchObservedRunningTime="2026-01-27 15:12:03.753822227 +0000 UTC m=+219.926222008" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.861624 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f"] Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.862581 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.864949 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.865092 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.866451 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.866621 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.867514 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.868942 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.882859 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f"] Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.889471 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.972179 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-config\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " 
pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.972300 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-client-ca\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.972329 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-proxy-ca-bundles\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.972370 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhjp4\" (UniqueName: \"kubernetes.io/projected/f706f519-858d-4da1-8b59-5e40c35e0b0d-kube-api-access-zhjp4\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:03 crc kubenswrapper[4697]: I0127 15:12:03.972411 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f706f519-858d-4da1-8b59-5e40c35e0b0d-serving-cert\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.074048 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-client-ca\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.074096 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-proxy-ca-bundles\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.074125 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhjp4\" (UniqueName: \"kubernetes.io/projected/f706f519-858d-4da1-8b59-5e40c35e0b0d-kube-api-access-zhjp4\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.074155 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f706f519-858d-4da1-8b59-5e40c35e0b0d-serving-cert\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.074194 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-config\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.075070 
4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-client-ca\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.075422 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-config\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.076269 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-proxy-ca-bundles\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.087422 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f706f519-858d-4da1-8b59-5e40c35e0b0d-serving-cert\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.119830 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhjp4\" (UniqueName: \"kubernetes.io/projected/f706f519-858d-4da1-8b59-5e40c35e0b0d-kube-api-access-zhjp4\") pod \"controller-manager-84cff6fc4f-9ln2f\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 
15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.179040 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.549413 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nckjk" event={"ID":"4e1946c0-832f-4b77-8e87-a716e9a10a8f","Type":"ContainerStarted","Data":"9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0"} Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.550849 4697 generic.go:334] "Generic (PLEG): container finished" podID="e0fca15d-774a-465e-9da1-686eed214cd7" containerID="ab192e592ce742daa619fa2c7cfc3f684e2c2dc1f315513a838c7f72cdaa80e3" exitCode=0 Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.550967 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e0fca15d-774a-465e-9da1-686eed214cd7","Type":"ContainerDied","Data":"ab192e592ce742daa619fa2c7cfc3f684e2c2dc1f315513a838c7f72cdaa80e3"} Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.552969 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qkrg" event={"ID":"8945ffcc-ee9c-46ab-b2dd-474253d4ba03","Type":"ContainerStarted","Data":"97cbc182b413de978937118acd4f0ff8d5ba3b100e9f21ae71c27da9d07e2aaf"} Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.553898 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.563098 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.582148 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="894a6339-d208-46db-8769-ac9153cb1ba0" path="/var/lib/kubelet/pods/894a6339-d208-46db-8769-ac9153cb1ba0/volumes" Jan 27 15:12:04 crc kubenswrapper[4697]: I0127 15:12:04.617850 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f"] Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.558202 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" event={"ID":"f706f519-858d-4da1-8b59-5e40c35e0b0d","Type":"ContainerStarted","Data":"a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb"} Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.558261 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" event={"ID":"f706f519-858d-4da1-8b59-5e40c35e0b0d","Type":"ContainerStarted","Data":"6acb07c3c6342364d0f4722530f8b7a6046ed49ddcaa69f4e381a4679a471461"} Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.575803 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" podStartSLOduration=32.575764079 podStartE2EDuration="32.575764079s" podCreationTimestamp="2026-01-27 15:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:12:05.573320026 +0000 UTC m=+221.745719817" watchObservedRunningTime="2026-01-27 15:12:05.575764079 +0000 UTC m=+221.748163870" Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.596566 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nckjk" podStartSLOduration=4.366022772 podStartE2EDuration="1m8.596544635s" podCreationTimestamp="2026-01-27 15:10:57 +0000 UTC" firstStartedPulling="2026-01-27 15:10:59.733022548 +0000 UTC m=+155.905422329" 
lastFinishedPulling="2026-01-27 15:12:03.963544401 +0000 UTC m=+220.135944192" observedRunningTime="2026-01-27 15:12:05.593358653 +0000 UTC m=+221.765758444" watchObservedRunningTime="2026-01-27 15:12:05.596544635 +0000 UTC m=+221.768944416" Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.633389 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8qkrg" podStartSLOduration=5.312995209 podStartE2EDuration="1m10.633361826s" podCreationTimestamp="2026-01-27 15:10:55 +0000 UTC" firstStartedPulling="2026-01-27 15:10:58.652628749 +0000 UTC m=+154.825028530" lastFinishedPulling="2026-01-27 15:12:03.972995366 +0000 UTC m=+220.145395147" observedRunningTime="2026-01-27 15:12:05.609302425 +0000 UTC m=+221.781702216" watchObservedRunningTime="2026-01-27 15:12:05.633361826 +0000 UTC m=+221.805761617" Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.678023 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.678098 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.678371 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-78k6r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.678411 4697 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-78k6r" podUID="73d9ac28-74b0-4ead-b4e4-b270264feb05" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.826805 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.904950 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.905002 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.997874 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0fca15d-774a-465e-9da1-686eed214cd7-kube-api-access\") pod \"e0fca15d-774a-465e-9da1-686eed214cd7\" (UID: \"e0fca15d-774a-465e-9da1-686eed214cd7\") " Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.998233 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0fca15d-774a-465e-9da1-686eed214cd7-kubelet-dir\") pod \"e0fca15d-774a-465e-9da1-686eed214cd7\" (UID: \"e0fca15d-774a-465e-9da1-686eed214cd7\") " Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.998381 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0fca15d-774a-465e-9da1-686eed214cd7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e0fca15d-774a-465e-9da1-686eed214cd7" (UID: "e0fca15d-774a-465e-9da1-686eed214cd7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:12:05 crc kubenswrapper[4697]: I0127 15:12:05.998552 4697 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0fca15d-774a-465e-9da1-686eed214cd7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:06 crc kubenswrapper[4697]: I0127 15:12:06.007073 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fca15d-774a-465e-9da1-686eed214cd7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e0fca15d-774a-465e-9da1-686eed214cd7" (UID: "e0fca15d-774a-465e-9da1-686eed214cd7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:12:06 crc kubenswrapper[4697]: I0127 15:12:06.099639 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0fca15d-774a-465e-9da1-686eed214cd7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:06 crc kubenswrapper[4697]: I0127 15:12:06.565077 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e0fca15d-774a-465e-9da1-686eed214cd7","Type":"ContainerDied","Data":"cb84cf8d8f9396d53ffcb4bbff285bafb55af81765e73a22fedcc8cde739c4f4"} Jan 27 15:12:06 crc kubenswrapper[4697]: I0127 15:12:06.565124 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb84cf8d8f9396d53ffcb4bbff285bafb55af81765e73a22fedcc8cde739c4f4" Jan 27 15:12:06 crc kubenswrapper[4697]: I0127 15:12:06.565177 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 15:12:06 crc kubenswrapper[4697]: I0127 15:12:06.566286 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:06 crc kubenswrapper[4697]: I0127 15:12:06.578226 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:12:07 crc kubenswrapper[4697]: I0127 15:12:07.491063 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8qkrg" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerName="registry-server" probeResult="failure" output=< Jan 27 15:12:07 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:12:07 crc kubenswrapper[4697]: > Jan 27 15:12:07 crc kubenswrapper[4697]: I0127 15:12:07.899718 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:12:07 crc kubenswrapper[4697]: I0127 15:12:07.899810 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:12:08 crc kubenswrapper[4697]: I0127 15:12:08.021378 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:12:15 crc kubenswrapper[4697]: I0127 15:12:15.627891 4697 generic.go:334] "Generic (PLEG): container finished" podID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerID="58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa" exitCode=0 Jan 27 15:12:15 crc kubenswrapper[4697]: I0127 15:12:15.627974 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56255" 
event={"ID":"316f7102-a9a6-40c4-b38b-ba9c7736526a","Type":"ContainerDied","Data":"58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa"} Jan 27 15:12:15 crc kubenswrapper[4697]: I0127 15:12:15.638752 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h858" event={"ID":"e2bffcd5-911f-4cd1-92b3-e70c361719c4","Type":"ContainerStarted","Data":"925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406"} Jan 27 15:12:15 crc kubenswrapper[4697]: I0127 15:12:15.641402 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59htg" event={"ID":"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d","Type":"ContainerStarted","Data":"633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70"} Jan 27 15:12:15 crc kubenswrapper[4697]: I0127 15:12:15.692175 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-78k6r" Jan 27 15:12:15 crc kubenswrapper[4697]: I0127 15:12:15.953744 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:12:16 crc kubenswrapper[4697]: I0127 15:12:16.001793 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8qkrg" Jan 27 15:12:16 crc kubenswrapper[4697]: I0127 15:12:16.648679 4697 generic.go:334] "Generic (PLEG): container finished" podID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerID="633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70" exitCode=0 Jan 27 15:12:16 crc kubenswrapper[4697]: I0127 15:12:16.648742 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59htg" event={"ID":"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d","Type":"ContainerDied","Data":"633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70"} Jan 27 15:12:16 crc kubenswrapper[4697]: I0127 
15:12:16.660205 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h858" event={"ID":"e2bffcd5-911f-4cd1-92b3-e70c361719c4","Type":"ContainerDied","Data":"925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406"} Jan 27 15:12:16 crc kubenswrapper[4697]: I0127 15:12:16.660152 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerID="925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406" exitCode=0 Jan 27 15:12:17 crc kubenswrapper[4697]: I0127 15:12:17.954446 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nckjk" Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.645624 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8qkrg"] Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.646398 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8qkrg" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerName="registry-server" containerID="cri-o://97cbc182b413de978937118acd4f0ff8d5ba3b100e9f21ae71c27da9d07e2aaf" gracePeriod=2 Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.690179 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h858" event={"ID":"e2bffcd5-911f-4cd1-92b3-e70c361719c4","Type":"ContainerStarted","Data":"ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9"} Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.703240 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56255" event={"ID":"316f7102-a9a6-40c4-b38b-ba9c7736526a","Type":"ContainerStarted","Data":"748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e"} Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.708889 4697 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wst7l" event={"ID":"20946332-e642-4802-b943-8c504ef8c3ec","Type":"ContainerStarted","Data":"2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307"} Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.714175 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59htg" event={"ID":"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d","Type":"ContainerStarted","Data":"19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7"} Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.720591 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5h858" podStartSLOduration=3.473287902 podStartE2EDuration="1m21.720572152s" podCreationTimestamp="2026-01-27 15:10:58 +0000 UTC" firstStartedPulling="2026-01-27 15:11:00.771935727 +0000 UTC m=+156.944335508" lastFinishedPulling="2026-01-27 15:12:19.019219967 +0000 UTC m=+235.191619758" observedRunningTime="2026-01-27 15:12:19.719590387 +0000 UTC m=+235.891990198" watchObservedRunningTime="2026-01-27 15:12:19.720572152 +0000 UTC m=+235.892971933" Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.720633 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv2z" event={"ID":"d7864bf9-220d-402f-bb77-0240a422c2f8","Type":"ContainerStarted","Data":"2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2"} Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.736732 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-59htg" podStartSLOduration=4.267328967 podStartE2EDuration="1m25.736716699s" podCreationTimestamp="2026-01-27 15:10:54 +0000 UTC" firstStartedPulling="2026-01-27 15:10:57.473264799 +0000 UTC m=+153.645664580" lastFinishedPulling="2026-01-27 15:12:18.942652521 +0000 UTC m=+235.115052312" 
observedRunningTime="2026-01-27 15:12:19.733971738 +0000 UTC m=+235.906371519" watchObservedRunningTime="2026-01-27 15:12:19.736716699 +0000 UTC m=+235.909116480" Jan 27 15:12:19 crc kubenswrapper[4697]: I0127 15:12:19.771639 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-56255" podStartSLOduration=5.584137427 podStartE2EDuration="1m25.77162087s" podCreationTimestamp="2026-01-27 15:10:54 +0000 UTC" firstStartedPulling="2026-01-27 15:10:58.652919198 +0000 UTC m=+154.825318969" lastFinishedPulling="2026-01-27 15:12:18.840402631 +0000 UTC m=+235.012802412" observedRunningTime="2026-01-27 15:12:19.771001194 +0000 UTC m=+235.943400975" watchObservedRunningTime="2026-01-27 15:12:19.77162087 +0000 UTC m=+235.944020651" Jan 27 15:12:20 crc kubenswrapper[4697]: I0127 15:12:20.726917 4697 generic.go:334] "Generic (PLEG): container finished" podID="20946332-e642-4802-b943-8c504ef8c3ec" containerID="2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307" exitCode=0 Jan 27 15:12:20 crc kubenswrapper[4697]: I0127 15:12:20.726994 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wst7l" event={"ID":"20946332-e642-4802-b943-8c504ef8c3ec","Type":"ContainerDied","Data":"2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307"} Jan 27 15:12:20 crc kubenswrapper[4697]: I0127 15:12:20.729173 4697 generic.go:334] "Generic (PLEG): container finished" podID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerID="2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2" exitCode=0 Jan 27 15:12:20 crc kubenswrapper[4697]: I0127 15:12:20.729206 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv2z" event={"ID":"d7864bf9-220d-402f-bb77-0240a422c2f8","Type":"ContainerDied","Data":"2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2"} Jan 27 15:12:21 crc kubenswrapper[4697]: I0127 
15:12:21.743831 4697 generic.go:334] "Generic (PLEG): container finished" podID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerID="97cbc182b413de978937118acd4f0ff8d5ba3b100e9f21ae71c27da9d07e2aaf" exitCode=0 Jan 27 15:12:21 crc kubenswrapper[4697]: I0127 15:12:21.743903 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qkrg" event={"ID":"8945ffcc-ee9c-46ab-b2dd-474253d4ba03","Type":"ContainerDied","Data":"97cbc182b413de978937118acd4f0ff8d5ba3b100e9f21ae71c27da9d07e2aaf"} Jan 27 15:12:21 crc kubenswrapper[4697]: I0127 15:12:21.745692 4697 generic.go:334] "Generic (PLEG): container finished" podID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerID="13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710" exitCode=0 Jan 27 15:12:21 crc kubenswrapper[4697]: I0127 15:12:21.745811 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wg52h" event={"ID":"9f446277-5df0-4b04-9f9b-cce248835bcd","Type":"ContainerDied","Data":"13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710"} Jan 27 15:12:21 crc kubenswrapper[4697]: I0127 15:12:21.975211 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8qkrg"
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.033060 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nckjk"]
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.033406 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nckjk" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerName="registry-server" containerID="cri-o://9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0" gracePeriod=2
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.118912 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-catalog-content\") pod \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") "
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.127058 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-utilities\") pod \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") "
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.127099 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqxwb\" (UniqueName: \"kubernetes.io/projected/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-kube-api-access-sqxwb\") pod \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\" (UID: \"8945ffcc-ee9c-46ab-b2dd-474253d4ba03\") "
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.127777 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-utilities" (OuterVolumeSpecName: "utilities") pod "8945ffcc-ee9c-46ab-b2dd-474253d4ba03" (UID: "8945ffcc-ee9c-46ab-b2dd-474253d4ba03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.132307 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-kube-api-access-sqxwb" (OuterVolumeSpecName: "kube-api-access-sqxwb") pod "8945ffcc-ee9c-46ab-b2dd-474253d4ba03" (UID: "8945ffcc-ee9c-46ab-b2dd-474253d4ba03"). InnerVolumeSpecName "kube-api-access-sqxwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.168519 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8945ffcc-ee9c-46ab-b2dd-474253d4ba03" (UID: "8945ffcc-ee9c-46ab-b2dd-474253d4ba03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.229283 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.229339 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqxwb\" (UniqueName: \"kubernetes.io/projected/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-kube-api-access-sqxwb\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.229360 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8945ffcc-ee9c-46ab-b2dd-474253d4ba03-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.752910 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qkrg" event={"ID":"8945ffcc-ee9c-46ab-b2dd-474253d4ba03","Type":"ContainerDied","Data":"237e7f03ad44043aeac263652b7a0506ff25e70f413d1ee58ce03758a2d9f739"}
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.752964 4697 scope.go:117] "RemoveContainer" containerID="97cbc182b413de978937118acd4f0ff8d5ba3b100e9f21ae71c27da9d07e2aaf"
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.753017 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8qkrg"
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.769946 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8qkrg"]
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.772688 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8qkrg"]
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.774323 4697 scope.go:117] "RemoveContainer" containerID="6836e7071aba079ec10026fc72a1485b151f58b1b77e27f9b2f8b0374c1a1bca"
Jan 27 15:12:22 crc kubenswrapper[4697]: I0127 15:12:22.788910 4697 scope.go:117] "RemoveContainer" containerID="ac33bd78fc30639888157b9f37c439f103a5d92365b90fa84e5cbc4e33047953"
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.315244 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nckjk"
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.346073 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rcr6\" (UniqueName: \"kubernetes.io/projected/4e1946c0-832f-4b77-8e87-a716e9a10a8f-kube-api-access-2rcr6\") pod \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") "
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.346128 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-catalog-content\") pod \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") "
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.346164 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-utilities\") pod \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\" (UID: \"4e1946c0-832f-4b77-8e87-a716e9a10a8f\") "
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.347074 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-utilities" (OuterVolumeSpecName: "utilities") pod "4e1946c0-832f-4b77-8e87-a716e9a10a8f" (UID: "4e1946c0-832f-4b77-8e87-a716e9a10a8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.351162 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1946c0-832f-4b77-8e87-a716e9a10a8f-kube-api-access-2rcr6" (OuterVolumeSpecName: "kube-api-access-2rcr6") pod "4e1946c0-832f-4b77-8e87-a716e9a10a8f" (UID: "4e1946c0-832f-4b77-8e87-a716e9a10a8f"). InnerVolumeSpecName "kube-api-access-2rcr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.382775 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e1946c0-832f-4b77-8e87-a716e9a10a8f" (UID: "4e1946c0-832f-4b77-8e87-a716e9a10a8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.447161 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.447358 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1946c0-832f-4b77-8e87-a716e9a10a8f-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.447368 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rcr6\" (UniqueName: \"kubernetes.io/projected/4e1946c0-832f-4b77-8e87-a716e9a10a8f-kube-api-access-2rcr6\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.760504 4697 generic.go:334] "Generic (PLEG): container finished" podID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerID="9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0" exitCode=0
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.760541 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nckjk" event={"ID":"4e1946c0-832f-4b77-8e87-a716e9a10a8f","Type":"ContainerDied","Data":"9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0"}
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.760568 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nckjk" event={"ID":"4e1946c0-832f-4b77-8e87-a716e9a10a8f","Type":"ContainerDied","Data":"99b0c3cb7f7d2f7e1bac76e58332eb7e0460b411beb9ca843ecfcdb2df370dc0"}
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.760588 4697 scope.go:117] "RemoveContainer" containerID="9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0"
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.760707 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nckjk"
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.789366 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nckjk"]
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.789414 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nckjk"]
Jan 27 15:12:23 crc kubenswrapper[4697]: I0127 15:12:23.889753 4697 scope.go:117] "RemoveContainer" containerID="bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100"
Jan 27 15:12:24 crc kubenswrapper[4697]: I0127 15:12:24.464598 4697 scope.go:117] "RemoveContainer" containerID="8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5"
Jan 27 15:12:24 crc kubenswrapper[4697]: I0127 15:12:24.486328 4697 scope.go:117] "RemoveContainer" containerID="9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0"
Jan 27 15:12:24 crc kubenswrapper[4697]: E0127 15:12:24.486830 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0\": container with ID starting with 9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0 not found: ID does not exist" containerID="9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0"
Jan 27 15:12:24 crc kubenswrapper[4697]: I0127 15:12:24.486872 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0"} err="failed to get container status \"9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0\": rpc error: code = NotFound desc = could not find container \"9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0\": container with ID starting with 9ad2c42cc2434515876173e199b82f92d1e79614eecc3b84f680a34af46407f0 not found: ID does not exist"
Jan 27 15:12:24 crc kubenswrapper[4697]: I0127 15:12:24.486899 4697 scope.go:117] "RemoveContainer" containerID="bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100"
Jan 27 15:12:24 crc kubenswrapper[4697]: E0127 15:12:24.491738 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100\": container with ID starting with bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100 not found: ID does not exist" containerID="bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100"
Jan 27 15:12:24 crc kubenswrapper[4697]: I0127 15:12:24.491774 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100"} err="failed to get container status \"bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100\": rpc error: code = NotFound desc = could not find container \"bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100\": container with ID starting with bf1254b99a71f15274c0d86d514213b5e7d0571810cc9062a854710bca0ee100 not found: ID does not exist"
Jan 27 15:12:24 crc kubenswrapper[4697]: I0127 15:12:24.491834 4697 scope.go:117] "RemoveContainer" containerID="8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5"
Jan 27 15:12:24 crc kubenswrapper[4697]: E0127 15:12:24.493306 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5\": container with ID starting with 8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5 not found: ID does not exist" containerID="8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5"
Jan 27 15:12:24 crc kubenswrapper[4697]: I0127 15:12:24.494105 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5"} err="failed to get container status \"8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5\": rpc error: code = NotFound desc = could not find container \"8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5\": container with ID starting with 8f016cd7a2ab829a41d67255ee0e689567016eb0444e540f55bfbdb6fd0916f5 not found: ID does not exist"
Jan 27 15:12:24 crc kubenswrapper[4697]: I0127 15:12:24.582156 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" path="/var/lib/kubelet/pods/4e1946c0-832f-4b77-8e87-a716e9a10a8f/volumes"
Jan 27 15:12:24 crc kubenswrapper[4697]: I0127 15:12:24.583059 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" path="/var/lib/kubelet/pods/8945ffcc-ee9c-46ab-b2dd-474253d4ba03/volumes"
Jan 27 15:12:25 crc kubenswrapper[4697]: I0127 15:12:25.430043 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-56255"
Jan 27 15:12:25 crc kubenswrapper[4697]: I0127 15:12:25.430960 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-56255"
Jan 27 15:12:25 crc kubenswrapper[4697]: I0127 15:12:25.470446 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-56255"
Jan 27 15:12:25 crc kubenswrapper[4697]: I0127 15:12:25.736279 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-59htg"
Jan 27 15:12:25 crc kubenswrapper[4697]: I0127 15:12:25.736328 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-59htg"
Jan 27 15:12:25 crc kubenswrapper[4697]: I0127 15:12:25.776101 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wst7l" event={"ID":"20946332-e642-4802-b943-8c504ef8c3ec","Type":"ContainerStarted","Data":"ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add"}
Jan 27 15:12:25 crc kubenswrapper[4697]: I0127 15:12:25.790214 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-59htg"
Jan 27 15:12:25 crc kubenswrapper[4697]: I0127 15:12:25.825470 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-56255"
Jan 27 15:12:25 crc kubenswrapper[4697]: I0127 15:12:25.888601 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-59htg"
Jan 27 15:12:26 crc kubenswrapper[4697]: I0127 15:12:26.799531 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wst7l" podStartSLOduration=5.03362262 podStartE2EDuration="1m29.799513959s" podCreationTimestamp="2026-01-27 15:10:57 +0000 UTC" firstStartedPulling="2026-01-27 15:10:59.698617403 +0000 UTC m=+155.871017184" lastFinishedPulling="2026-01-27 15:12:24.464508742 +0000 UTC m=+240.636908523" observedRunningTime="2026-01-27 15:12:26.798607485 +0000 UTC m=+242.971007276" watchObservedRunningTime="2026-01-27 15:12:26.799513959 +0000 UTC m=+242.971913740"
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.285627 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wst7l"
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.286256 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wst7l"
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.497511 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5h858"
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.497846 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5h858"
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.537262 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5h858"
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.806149 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wg52h" event={"ID":"9f446277-5df0-4b04-9f9b-cce248835bcd","Type":"ContainerStarted","Data":"ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982"}
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.808276 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv2z" event={"ID":"d7864bf9-220d-402f-bb77-0240a422c2f8","Type":"ContainerStarted","Data":"6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556"}
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.833335 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wg52h" podStartSLOduration=4.542616952 podStartE2EDuration="1m33.833314619s" podCreationTimestamp="2026-01-27 15:10:55 +0000 UTC" firstStartedPulling="2026-01-27 15:10:58.607048631 +0000 UTC m=+154.779448412" lastFinishedPulling="2026-01-27 15:12:27.897746258 +0000 UTC m=+244.070146079" observedRunningTime="2026-01-27 15:12:28.83223232 +0000 UTC m=+245.004632101" watchObservedRunningTime="2026-01-27 15:12:28.833314619 +0000 UTC m=+245.005714390"
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.848203 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5h858"
Jan 27 15:12:28 crc kubenswrapper[4697]: I0127 15:12:28.851006 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cbv2z" podStartSLOduration=3.714383198 podStartE2EDuration="1m31.850990165s" podCreationTimestamp="2026-01-27 15:10:57 +0000 UTC" firstStartedPulling="2026-01-27 15:10:59.735312574 +0000 UTC m=+155.907712355" lastFinishedPulling="2026-01-27 15:12:27.871919521 +0000 UTC m=+244.044319322" observedRunningTime="2026-01-27 15:12:28.849211479 +0000 UTC m=+245.021611270" watchObservedRunningTime="2026-01-27 15:12:28.850990165 +0000 UTC m=+245.023389946"
Jan 27 15:12:29 crc kubenswrapper[4697]: I0127 15:12:29.348689 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wst7l" podUID="20946332-e642-4802-b943-8c504ef8c3ec" containerName="registry-server" probeResult="failure" output=<
Jan 27 15:12:29 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s
Jan 27 15:12:29 crc kubenswrapper[4697]: >
Jan 27 15:12:30 crc kubenswrapper[4697]: I0127 15:12:30.220664 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5h858"]
Jan 27 15:12:30 crc kubenswrapper[4697]: I0127 15:12:30.818651 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5h858" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerName="registry-server" containerID="cri-o://ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9" gracePeriod=2
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.361353 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5h858"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.452138 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-catalog-content\") pod \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") "
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.452194 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-utilities\") pod \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") "
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.452220 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4h76\" (UniqueName: \"kubernetes.io/projected/e2bffcd5-911f-4cd1-92b3-e70c361719c4-kube-api-access-x4h76\") pod \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\" (UID: \"e2bffcd5-911f-4cd1-92b3-e70c361719c4\") "
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.453566 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-utilities" (OuterVolumeSpecName: "utilities") pod "e2bffcd5-911f-4cd1-92b3-e70c361719c4" (UID: "e2bffcd5-911f-4cd1-92b3-e70c361719c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.456978 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bffcd5-911f-4cd1-92b3-e70c361719c4-kube-api-access-x4h76" (OuterVolumeSpecName: "kube-api-access-x4h76") pod "e2bffcd5-911f-4cd1-92b3-e70c361719c4" (UID: "e2bffcd5-911f-4cd1-92b3-e70c361719c4"). InnerVolumeSpecName "kube-api-access-x4h76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.553156 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4h76\" (UniqueName: \"kubernetes.io/projected/e2bffcd5-911f-4cd1-92b3-e70c361719c4-kube-api-access-x4h76\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.553186 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.573360 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2bffcd5-911f-4cd1-92b3-e70c361719c4" (UID: "e2bffcd5-911f-4cd1-92b3-e70c361719c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.657997 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2bffcd5-911f-4cd1-92b3-e70c361719c4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.825536 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerID="ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9" exitCode=0
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.825587 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h858" event={"ID":"e2bffcd5-911f-4cd1-92b3-e70c361719c4","Type":"ContainerDied","Data":"ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9"}
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.825619 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h858" event={"ID":"e2bffcd5-911f-4cd1-92b3-e70c361719c4","Type":"ContainerDied","Data":"86a78fec4a65667fa92b3b24055434ff637f114cc36e4acea9b42dad5d50a1aa"}
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.825639 4697 scope.go:117] "RemoveContainer" containerID="ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.825829 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5h858"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.859508 4697 scope.go:117] "RemoveContainer" containerID="925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.862648 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5h858"]
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.867000 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5h858"]
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.877398 4697 scope.go:117] "RemoveContainer" containerID="6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.890015 4697 scope.go:117] "RemoveContainer" containerID="ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9"
Jan 27 15:12:31 crc kubenswrapper[4697]: E0127 15:12:31.890423 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9\": container with ID starting with ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9 not found: ID does not exist" containerID="ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.890454 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9"} err="failed to get container status \"ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9\": rpc error: code = NotFound desc = could not find container \"ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9\": container with ID starting with ee19b0e310f9ff098fe73fe0a107ceea8b3ce61b6d00a3aa97ddec42bc0d99a9 not found: ID does not exist"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.890477 4697 scope.go:117] "RemoveContainer" containerID="925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406"
Jan 27 15:12:31 crc kubenswrapper[4697]: E0127 15:12:31.890805 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406\": container with ID starting with 925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406 not found: ID does not exist" containerID="925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.890825 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406"} err="failed to get container status \"925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406\": rpc error: code = NotFound desc = could not find container \"925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406\": container with ID starting with 925a46581d0bafde121764f90c80e65d5670ef52e74e5186bd2b0f9108780406 not found: ID does not exist"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.890837 4697 scope.go:117] "RemoveContainer" containerID="6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c"
Jan 27 15:12:31 crc kubenswrapper[4697]: E0127 15:12:31.891185 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c\": container with ID starting with 6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c not found: ID does not exist" containerID="6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c"
Jan 27 15:12:31 crc kubenswrapper[4697]: I0127 15:12:31.891208 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c"} err="failed to get container status \"6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c\": rpc error: code = NotFound desc = could not find container \"6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c\": container with ID starting with 6d6ab85daf49ea0b7474b77fb5bd4170c7bdb80ac64fd6d166164f859208781c not found: ID does not exist"
Jan 27 15:12:32 crc kubenswrapper[4697]: I0127 15:12:32.574462 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" path="/var/lib/kubelet/pods/e2bffcd5-911f-4cd1-92b3-e70c361719c4/volumes"
Jan 27 15:12:35 crc kubenswrapper[4697]: I0127 15:12:35.121914 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tgccn"]
Jan 27 15:12:35 crc kubenswrapper[4697]: I0127 15:12:35.904219 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wg52h"
Jan 27 15:12:35 crc kubenswrapper[4697]: I0127 15:12:35.904577 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wg52h"
Jan 27 15:12:35 crc kubenswrapper[4697]: I0127 15:12:35.942799 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wg52h"
Jan 27 15:12:36 crc kubenswrapper[4697]: I0127 15:12:36.885132 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wg52h"
Jan 27 15:12:36 crc kubenswrapper[4697]: I0127 15:12:36.928001 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wg52h"]
Jan 27 15:12:37 crc kubenswrapper[4697]: I0127 15:12:37.474334 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cbv2z"
Jan 27 15:12:37 crc kubenswrapper[4697]: I0127 15:12:37.474728 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cbv2z"
Jan 27 15:12:37 crc kubenswrapper[4697]: I0127 15:12:37.520734 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cbv2z"
Jan 27 15:12:37 crc kubenswrapper[4697]: I0127 15:12:37.899959 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cbv2z"
Jan 27 15:12:38 crc kubenswrapper[4697]: I0127 15:12:38.321027 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wst7l"
Jan 27 15:12:38 crc kubenswrapper[4697]: I0127 15:12:38.361579 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wst7l"
Jan 27 15:12:38 crc kubenswrapper[4697]: I0127 15:12:38.856573 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wg52h" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerName="registry-server" containerID="cri-o://ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982" gracePeriod=2
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.415632 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wg52h"
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.441024 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-utilities\") pod \"9f446277-5df0-4b04-9f9b-cce248835bcd\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") "
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.441106 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-catalog-content\") pod \"9f446277-5df0-4b04-9f9b-cce248835bcd\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") "
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.441184 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtldz\" (UniqueName: \"kubernetes.io/projected/9f446277-5df0-4b04-9f9b-cce248835bcd-kube-api-access-rtldz\") pod \"9f446277-5df0-4b04-9f9b-cce248835bcd\" (UID: \"9f446277-5df0-4b04-9f9b-cce248835bcd\") "
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.442248 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-utilities" (OuterVolumeSpecName: "utilities") pod "9f446277-5df0-4b04-9f9b-cce248835bcd" (UID: "9f446277-5df0-4b04-9f9b-cce248835bcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.448224 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f446277-5df0-4b04-9f9b-cce248835bcd-kube-api-access-rtldz" (OuterVolumeSpecName: "kube-api-access-rtldz") pod "9f446277-5df0-4b04-9f9b-cce248835bcd" (UID: "9f446277-5df0-4b04-9f9b-cce248835bcd"). InnerVolumeSpecName "kube-api-access-rtldz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.489443 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f446277-5df0-4b04-9f9b-cce248835bcd" (UID: "9f446277-5df0-4b04-9f9b-cce248835bcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.542008 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.542040 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f446277-5df0-4b04-9f9b-cce248835bcd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.542051 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtldz\" (UniqueName: \"kubernetes.io/projected/9f446277-5df0-4b04-9f9b-cce248835bcd-kube-api-access-rtldz\") on node \"crc\" DevicePath \"\""
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.872331 4697 generic.go:334] "Generic (PLEG): container finished" podID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerID="ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982" exitCode=0
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.872370 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wg52h" event={"ID":"9f446277-5df0-4b04-9f9b-cce248835bcd","Type":"ContainerDied","Data":"ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982"}
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.872645 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wg52h" event={"ID":"9f446277-5df0-4b04-9f9b-cce248835bcd","Type":"ContainerDied","Data":"a812afe0a72b6cdfb23d4e9eeeb4fde1feca2263015b42cee84612f178454cda"}
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.872393 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wg52h"
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.872700 4697 scope.go:117] "RemoveContainer" containerID="ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982"
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.886447 4697 scope.go:117] "RemoveContainer" containerID="13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710"
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.907103 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wg52h"]
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.910413 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wg52h"]
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.918612 4697 scope.go:117] "RemoveContainer" containerID="be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c"
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.931181 4697 scope.go:117] "RemoveContainer" containerID="ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982"
Jan 27 15:12:39 crc kubenswrapper[4697]: E0127 15:12:39.931615 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982\": container with ID starting with ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982 not found: ID does not exist" containerID="ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982"
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.931651 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982"} err="failed to get container status \"ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982\": rpc error: code = NotFound desc = could not find container \"ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982\": container with ID starting with ced72411403dbf9bc7fe7070b78e09bbe0921886c2b5f0350a18aa73a6a3d982 not found: ID does not exist"
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.931678 4697 scope.go:117] "RemoveContainer" containerID="13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710"
Jan 27 15:12:39 crc kubenswrapper[4697]: E0127 15:12:39.932160 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710\": container with ID starting with 13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710 not found: ID does not exist" containerID="13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710"
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.932197 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710"} err="failed to get container status \"13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710\": rpc error: code = NotFound desc = could not find container \"13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710\": container with ID starting with 13424e52e8677931055388dd4d9ab772e7dfd571cd9b02fbd39120cb65864710 not found: ID does not exist"
Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.932237 4697 scope.go:117] "RemoveContainer" containerID="be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c"
Jan 27 15:12:39 crc
kubenswrapper[4697]: E0127 15:12:39.932537 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c\": container with ID starting with be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c not found: ID does not exist" containerID="be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c" Jan 27 15:12:39 crc kubenswrapper[4697]: I0127 15:12:39.932558 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c"} err="failed to get container status \"be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c\": rpc error: code = NotFound desc = could not find container \"be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c\": container with ID starting with be769011b69038e2900f3c7a43da6a2bde3956badf2d2689c9120e5063a6df6c not found: ID does not exist" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.574215 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" path="/var/lib/kubelet/pods/9f446277-5df0-4b04-9f9b-cce248835bcd/volumes" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.673144 4697 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674430 4697 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674675 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerName="extract-utilities" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674697 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerName="extract-utilities" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674710 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674719 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674731 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerName="extract-utilities" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674739 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerName="extract-utilities" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674750 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerName="extract-content" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674757 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerName="extract-content" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674768 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674775 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674820 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e0fca15d-774a-465e-9da1-686eed214cd7" containerName="pruner" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674829 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fca15d-774a-465e-9da1-686eed214cd7" containerName="pruner" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674840 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerName="extract-utilities" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674847 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerName="extract-utilities" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674859 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerName="extract-content" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674866 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerName="extract-content" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674877 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674884 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674897 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674905 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674913 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerName="extract-content" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674920 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerName="extract-content" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674932 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerName="extract-utilities" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674940 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerName="extract-utilities" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.674949 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerName="extract-content" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.674956 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerName="extract-content" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.675065 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fca15d-774a-465e-9da1-686eed214cd7" containerName="pruner" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.675083 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f446277-5df0-4b04-9f9b-cce248835bcd" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.675096 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="8945ffcc-ee9c-46ab-b2dd-474253d4ba03" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.675107 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bffcd5-911f-4cd1-92b3-e70c361719c4" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.675117 4697 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4e1946c0-832f-4b77-8e87-a716e9a10a8f" containerName="registry-server" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.675473 4697 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.675609 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.675969 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94" gracePeriod=15 Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676053 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044" gracePeriod=15 Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676142 4697 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676158 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9" gracePeriod=15 Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676232 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9" gracePeriod=15 Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676286 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80" gracePeriod=15 Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.676338 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676348 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.676358 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676365 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.676390 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676395 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.676402 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676408 
4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.676418 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676424 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.676436 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676442 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.676464 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676470 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 15:12:40 crc kubenswrapper[4697]: E0127 15:12:40.676478 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676485 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676596 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676623 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676631 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676640 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676650 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676658 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.676896 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.680039 4697 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.859918 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.859961 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.859980 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.860003 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.860017 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.860032 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.860076 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.860090 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.961718 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.961995 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962084 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.961880 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962314 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962349 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962370 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962385 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 
15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962412 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962441 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962457 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962460 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962489 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 
15:12:40.962508 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962534 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:40 crc kubenswrapper[4697]: I0127 15:12:40.962539 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.884339 4697 generic.go:334] "Generic (PLEG): container finished" podID="01d02477-81a8-4453-bac0-0aaed3d659b5" containerID="bf15b21a484b4733ef86ec42c189cc9d9446085c9a752a3c462e69d2e66518c6" exitCode=0 Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.884438 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"01d02477-81a8-4453-bac0-0aaed3d659b5","Type":"ContainerDied","Data":"bf15b21a484b4733ef86ec42c189cc9d9446085c9a752a3c462e69d2e66518c6"} Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.885331 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.245:6443: connect: connection refused" Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.886234 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.887423 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.888066 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9" exitCode=0 Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.888092 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044" exitCode=0 Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.888103 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9" exitCode=0 Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.888112 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80" exitCode=2 Jan 27 15:12:41 crc kubenswrapper[4697]: I0127 15:12:41.888162 4697 scope.go:117] "RemoveContainer" containerID="3144c28de6be75231118993ba779a42bcc9032d51e927df649d3abb602ffa5dd" Jan 27 15:12:42 crc kubenswrapper[4697]: I0127 15:12:42.896455 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:12:43 crc kubenswrapper[4697]: 
I0127 15:12:43.149485 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.151006 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.151586 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.152013 4697 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.190500 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.191068 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.191595 4697 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311413 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-var-lock\") pod \"01d02477-81a8-4453-bac0-0aaed3d659b5\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311474 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311509 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-var-lock" (OuterVolumeSpecName: "var-lock") pod "01d02477-81a8-4453-bac0-0aaed3d659b5" (UID: "01d02477-81a8-4453-bac0-0aaed3d659b5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311526 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01d02477-81a8-4453-bac0-0aaed3d659b5-kube-api-access\") pod \"01d02477-81a8-4453-bac0-0aaed3d659b5\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311549 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311553 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311607 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311701 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-kubelet-dir\") pod \"01d02477-81a8-4453-bac0-0aaed3d659b5\" (UID: \"01d02477-81a8-4453-bac0-0aaed3d659b5\") " Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311729 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311756 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311824 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "01d02477-81a8-4453-bac0-0aaed3d659b5" (UID: "01d02477-81a8-4453-bac0-0aaed3d659b5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311965 4697 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311981 4697 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.311992 4697 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/01d02477-81a8-4453-bac0-0aaed3d659b5-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.312002 4697 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.312013 4697 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.316158 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d02477-81a8-4453-bac0-0aaed3d659b5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "01d02477-81a8-4453-bac0-0aaed3d659b5" (UID: "01d02477-81a8-4453-bac0-0aaed3d659b5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.413360 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01d02477-81a8-4453-bac0-0aaed3d659b5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.905099 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.905878 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94" exitCode=0 Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.905967 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.905974 4697 scope.go:117] "RemoveContainer" containerID="9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.910801 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"01d02477-81a8-4453-bac0-0aaed3d659b5","Type":"ContainerDied","Data":"35f5960c095e39ea7a83b0d0310a81ac98711fb402f3a30c50a8e5d4cf7d0497"} Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.910847 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35f5960c095e39ea7a83b0d0310a81ac98711fb402f3a30c50a8e5d4cf7d0497" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.910893 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.921942 4697 scope.go:117] "RemoveContainer" containerID="772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.925443 4697 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.925752 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.929398 4697 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.929917 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.944568 4697 scope.go:117] "RemoveContainer" containerID="dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9" Jan 27 15:12:43 crc 
kubenswrapper[4697]: I0127 15:12:43.961012 4697 scope.go:117] "RemoveContainer" containerID="aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.974378 4697 scope.go:117] "RemoveContainer" containerID="841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94" Jan 27 15:12:43 crc kubenswrapper[4697]: I0127 15:12:43.993983 4697 scope.go:117] "RemoveContainer" containerID="1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.010407 4697 scope.go:117] "RemoveContainer" containerID="9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9" Jan 27 15:12:44 crc kubenswrapper[4697]: E0127 15:12:44.010851 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\": container with ID starting with 9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9 not found: ID does not exist" containerID="9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.010880 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9"} err="failed to get container status \"9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\": rpc error: code = NotFound desc = could not find container \"9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9\": container with ID starting with 9d1140be76b3f274b414e158153723d043089cb9b01d27733976db83dc4601f9 not found: ID does not exist" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.010903 4697 scope.go:117] "RemoveContainer" containerID="772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044" Jan 27 15:12:44 crc kubenswrapper[4697]: E0127 15:12:44.011357 
4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\": container with ID starting with 772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044 not found: ID does not exist" containerID="772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.011409 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044"} err="failed to get container status \"772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\": rpc error: code = NotFound desc = could not find container \"772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044\": container with ID starting with 772509e08b1dcc68190d81e10a93fe348af55fdc71dbab2f0cadffd65089c044 not found: ID does not exist" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.011442 4697 scope.go:117] "RemoveContainer" containerID="dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9" Jan 27 15:12:44 crc kubenswrapper[4697]: E0127 15:12:44.012040 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\": container with ID starting with dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9 not found: ID does not exist" containerID="dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.012070 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9"} err="failed to get container status \"dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\": rpc error: code = 
NotFound desc = could not find container \"dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9\": container with ID starting with dc09ec12a81a4e2954a0d1146819e9f9b4fc1fd442a3e9c930ea213aff875eb9 not found: ID does not exist" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.012094 4697 scope.go:117] "RemoveContainer" containerID="aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80" Jan 27 15:12:44 crc kubenswrapper[4697]: E0127 15:12:44.012569 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\": container with ID starting with aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80 not found: ID does not exist" containerID="aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.012593 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80"} err="failed to get container status \"aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\": rpc error: code = NotFound desc = could not find container \"aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80\": container with ID starting with aa7833382543ce12d026eb8bbc6fb93276a1105a0cc34d215e719591be740f80 not found: ID does not exist" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.012609 4697 scope.go:117] "RemoveContainer" containerID="841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94" Jan 27 15:12:44 crc kubenswrapper[4697]: E0127 15:12:44.013213 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\": container with ID starting with 
841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94 not found: ID does not exist" containerID="841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.013243 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94"} err="failed to get container status \"841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\": rpc error: code = NotFound desc = could not find container \"841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94\": container with ID starting with 841fe2379065903ddc38b4968c1764a6c83d13f42c7587f20be81d8539199c94 not found: ID does not exist" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.013264 4697 scope.go:117] "RemoveContainer" containerID="1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453" Jan 27 15:12:44 crc kubenswrapper[4697]: E0127 15:12:44.013618 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\": container with ID starting with 1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453 not found: ID does not exist" containerID="1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.013645 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453"} err="failed to get container status \"1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\": rpc error: code = NotFound desc = could not find container \"1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453\": container with ID starting with 1d9c79b1675802dcd1800cdbf3562832c4d201ff1b4d7ab4504118a41a245453 not found: ID does not 
exist" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.569854 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.570176 4697 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:44 crc kubenswrapper[4697]: I0127 15:12:44.574925 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.151683 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:12:45Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:12:45Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:12:45Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:12:45Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.152442 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.152853 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.153131 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 
15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.153469 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.153503 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.617310 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.617718 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.617946 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.618219 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.618596 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:45 crc kubenswrapper[4697]: I0127 
15:12:45.618627 4697 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.618878 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="200ms" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.715552 4697 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:45 crc kubenswrapper[4697]: I0127 15:12:45.716043 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:45 crc kubenswrapper[4697]: W0127 15:12:45.736017 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-dccb11893d17b288697f3c0c705fdc58bbb6abd4f2caa7c733c13a76b9d862ea WatchSource:0}: Error finding container dccb11893d17b288697f3c0c705fdc58bbb6abd4f2caa7c733c13a76b9d862ea: Status 404 returned error can't find the container with id dccb11893d17b288697f3c0c705fdc58bbb6abd4f2caa7c733c13a76b9d862ea Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.738072 4697 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.245:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9f37d454f29c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:12:45.737538204 +0000 UTC m=+261.909937985,LastTimestamp:2026-01-27 15:12:45.737538204 +0000 UTC m=+261.909937985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:12:45 crc kubenswrapper[4697]: E0127 15:12:45.819668 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="400ms" Jan 27 15:12:45 crc kubenswrapper[4697]: I0127 15:12:45.928341 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"dccb11893d17b288697f3c0c705fdc58bbb6abd4f2caa7c733c13a76b9d862ea"} Jan 27 15:12:46 crc kubenswrapper[4697]: E0127 15:12:46.220675 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="800ms" Jan 27 15:12:46 crc kubenswrapper[4697]: I0127 15:12:46.934356 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8ba8e6a601928789e2e3281bae11d875b8a6c0acec19e1a79779b980f0567b78"} Jan 27 15:12:46 crc kubenswrapper[4697]: E0127 15:12:46.934906 4697 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:46 crc kubenswrapper[4697]: I0127 15:12:46.935583 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:47 crc kubenswrapper[4697]: E0127 15:12:47.021103 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="1.6s" Jan 27 15:12:47 crc kubenswrapper[4697]: E0127 15:12:47.619616 4697 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.245:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9f37d454f29c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:12:45.737538204 +0000 UTC m=+261.909937985,LastTimestamp:2026-01-27 15:12:45.737538204 +0000 UTC m=+261.909937985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:12:47 crc kubenswrapper[4697]: E0127 15:12:47.940338 4697 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:12:48 crc kubenswrapper[4697]: E0127 15:12:48.622687 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="3.2s" Jan 27 15:12:51 crc kubenswrapper[4697]: E0127 15:12:51.824035 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="6.4s" Jan 27 15:12:53 crc kubenswrapper[4697]: I0127 15:12:53.978746 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 15:12:53 crc kubenswrapper[4697]: I0127 15:12:53.979144 4697 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d" 
exitCode=1 Jan 27 15:12:53 crc kubenswrapper[4697]: I0127 15:12:53.979190 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d"} Jan 27 15:12:53 crc kubenswrapper[4697]: I0127 15:12:53.979886 4697 scope.go:117] "RemoveContainer" containerID="1b23c092c5d493951a1f6dbbf0482f102f36a830133d843f3c574afba2e1d50d" Jan 27 15:12:53 crc kubenswrapper[4697]: I0127 15:12:53.980112 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:53 crc kubenswrapper[4697]: I0127 15:12:53.980669 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:54 crc kubenswrapper[4697]: I0127 15:12:54.355941 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:12:54 crc kubenswrapper[4697]: I0127 15:12:54.570870 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:54 crc kubenswrapper[4697]: I0127 15:12:54.571375 4697 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:54 crc kubenswrapper[4697]: I0127 15:12:54.992077 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 15:12:54 crc kubenswrapper[4697]: I0127 15:12:54.992414 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"825cc26f2fd162f056c8089c26055d7c76ed4da900be9aca7d2371abcdd4d7b2"} Jan 27 15:12:54 crc kubenswrapper[4697]: I0127 15:12:54.994210 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:54 crc kubenswrapper[4697]: I0127 15:12:54.994840 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:55 crc kubenswrapper[4697]: I0127 15:12:55.568367 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:55 crc kubenswrapper[4697]: I0127 15:12:55.569207 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:55 crc kubenswrapper[4697]: I0127 15:12:55.569470 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:55 crc kubenswrapper[4697]: I0127 15:12:55.585883 4697 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:12:55 crc kubenswrapper[4697]: I0127 15:12:55.585912 4697 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:12:55 crc kubenswrapper[4697]: E0127 15:12:55.587240 4697 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:55 crc kubenswrapper[4697]: I0127 15:12:55.587910 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:55 crc kubenswrapper[4697]: W0127 15:12:55.609810 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-32d62487d9b89d59f09faccb9461c2b7caa29804f25183eaea2acb00a4547f18 WatchSource:0}: Error finding container 32d62487d9b89d59f09faccb9461c2b7caa29804f25183eaea2acb00a4547f18: Status 404 returned error can't find the container with id 32d62487d9b89d59f09faccb9461c2b7caa29804f25183eaea2acb00a4547f18 Jan 27 15:12:56 crc kubenswrapper[4697]: I0127 15:12:55.999730 4697 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3806944d937ceb8e9de0f8f73d2b2ab04755e4d1965e093411ad2543154136c1" exitCode=0 Jan 27 15:12:56 crc kubenswrapper[4697]: I0127 15:12:55.999880 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3806944d937ceb8e9de0f8f73d2b2ab04755e4d1965e093411ad2543154136c1"} Jan 27 15:12:56 crc kubenswrapper[4697]: I0127 15:12:56.000145 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"32d62487d9b89d59f09faccb9461c2b7caa29804f25183eaea2acb00a4547f18"} Jan 27 15:12:56 crc kubenswrapper[4697]: I0127 15:12:56.000619 4697 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:12:56 crc kubenswrapper[4697]: I0127 15:12:56.000649 4697 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:12:56 crc kubenswrapper[4697]: E0127 15:12:56.001081 4697 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:56 crc kubenswrapper[4697]: I0127 15:12:56.001148 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:56 crc kubenswrapper[4697]: I0127 15:12:56.001580 4697 status_manager.go:851] "Failed to get status for pod" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Jan 27 15:12:57 crc kubenswrapper[4697]: I0127 15:12:57.009622 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"10b9ddfa67b375402cba7d52f42be8db90032fb4680ef0aa115e17855e3ed4c0"} Jan 27 15:12:57 crc kubenswrapper[4697]: I0127 15:12:57.009971 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8c96b0767eb16e2b91f798c74c07c2a62cd1a310e95e187bc7e3370b8bf05878"} Jan 27 15:12:57 crc kubenswrapper[4697]: I0127 15:12:57.009984 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"89fa41c5a706110b2f1b6b5a287006991d1073929d1c616ee271a1b97ab7f62e"} Jan 27 15:12:57 crc kubenswrapper[4697]: I0127 15:12:57.009992 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aacece7be4dbb102965687fd8d0ff8ea67d7f6e4e6a5a636bd2aa75c0d8979cd"} Jan 27 15:12:58 crc kubenswrapper[4697]: I0127 15:12:58.016868 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be09124d361689d32965078290eb1708d203b4af4a423a1fd90245b5bf30f41c"} Jan 27 15:12:58 crc kubenswrapper[4697]: I0127 15:12:58.017052 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:12:58 crc kubenswrapper[4697]: I0127 15:12:58.017146 4697 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:12:58 crc kubenswrapper[4697]: I0127 15:12:58.017164 4697 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:12:58 crc kubenswrapper[4697]: I0127 15:12:58.059350 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:12:58 crc kubenswrapper[4697]: I0127 15:12:58.064918 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:12:59 crc kubenswrapper[4697]: I0127 15:12:59.021250 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 
15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.148129 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" podUID="d1d0b154-f221-4132-9d6f-a17173841b1f" containerName="oauth-openshift" containerID="cri-o://b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe" gracePeriod=15 Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.588360 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.588626 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.590242 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.592499 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737075 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-login\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737130 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-ocp-branding-template\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737173 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k96j\" (UniqueName: \"kubernetes.io/projected/d1d0b154-f221-4132-9d6f-a17173841b1f-kube-api-access-5k96j\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737205 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-serving-cert\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737227 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-error\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737269 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-dir\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737299 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-router-certs\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737332 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-trusted-ca-bundle\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737354 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-policies\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737379 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-session\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737405 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-provider-selection\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737427 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-service-ca\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737451 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-cliconfig\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.737475 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-idp-0-file-data\") pod \"d1d0b154-f221-4132-9d6f-a17173841b1f\" (UID: \"d1d0b154-f221-4132-9d6f-a17173841b1f\") " Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.738006 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.738737 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.742244 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.743058 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.743633 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.743654 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.744079 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.744592 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.746047 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.746275 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.746493 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.746645 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.747116 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d0b154-f221-4132-9d6f-a17173841b1f-kube-api-access-5k96j" (OuterVolumeSpecName: "kube-api-access-5k96j") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "kube-api-access-5k96j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.751104 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d1d0b154-f221-4132-9d6f-a17173841b1f" (UID: "d1d0b154-f221-4132-9d6f-a17173841b1f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.838796 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.838831 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.838851 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.838866 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.838878 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.838890 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.838901 4697 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.838911 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k96j\" (UniqueName: \"kubernetes.io/projected/d1d0b154-f221-4132-9d6f-a17173841b1f-kube-api-access-5k96j\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.838921 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.839084 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.839100 4697 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.839113 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 15:13:00.839127 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:00 crc kubenswrapper[4697]: I0127 
15:13:00.839140 4697 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1d0b154-f221-4132-9d6f-a17173841b1f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:01 crc kubenswrapper[4697]: I0127 15:13:01.033817 4697 generic.go:334] "Generic (PLEG): container finished" podID="d1d0b154-f221-4132-9d6f-a17173841b1f" containerID="b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe" exitCode=0 Jan 27 15:13:01 crc kubenswrapper[4697]: I0127 15:13:01.033875 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" event={"ID":"d1d0b154-f221-4132-9d6f-a17173841b1f","Type":"ContainerDied","Data":"b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe"} Jan 27 15:13:01 crc kubenswrapper[4697]: I0127 15:13:01.033909 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" event={"ID":"d1d0b154-f221-4132-9d6f-a17173841b1f","Type":"ContainerDied","Data":"f2f5b03e3fb3d7b9298e4628c6ddb631d95c3419b930c6c372fefbbbbb37ec85"} Jan 27 15:13:01 crc kubenswrapper[4697]: I0127 15:13:01.033931 4697 scope.go:117] "RemoveContainer" containerID="b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe" Jan 27 15:13:01 crc kubenswrapper[4697]: I0127 15:13:01.034044 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tgccn" Jan 27 15:13:01 crc kubenswrapper[4697]: I0127 15:13:01.072259 4697 scope.go:117] "RemoveContainer" containerID="b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe" Jan 27 15:13:01 crc kubenswrapper[4697]: E0127 15:13:01.072927 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe\": container with ID starting with b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe not found: ID does not exist" containerID="b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe" Jan 27 15:13:01 crc kubenswrapper[4697]: I0127 15:13:01.072993 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe"} err="failed to get container status \"b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe\": rpc error: code = NotFound desc = could not find container \"b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe\": container with ID starting with b8e00742347289d7c9777900c7b68957702d88e6f88a26acc7ec4d270d416efe not found: ID does not exist" Jan 27 15:13:03 crc kubenswrapper[4697]: I0127 15:13:03.218387 4697 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:13:03 crc kubenswrapper[4697]: E0127 15:13:03.418778 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 27 15:13:03 crc kubenswrapper[4697]: E0127 15:13:03.706018 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" 
logger="UnhandledError" Jan 27 15:13:04 crc kubenswrapper[4697]: I0127 15:13:04.050927 4697 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:13:04 crc kubenswrapper[4697]: I0127 15:13:04.050958 4697 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:13:04 crc kubenswrapper[4697]: I0127 15:13:04.054813 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:13:04 crc kubenswrapper[4697]: I0127 15:13:04.360321 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:13:04 crc kubenswrapper[4697]: I0127 15:13:04.586340 4697 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2d7f92b8-c57f-451b-847b-bfc0b780e73b" Jan 27 15:13:05 crc kubenswrapper[4697]: I0127 15:13:05.057660 4697 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:13:05 crc kubenswrapper[4697]: I0127 15:13:05.058002 4697 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30821478-065e-48b2-85f3-ae69260477fb" Jan 27 15:13:05 crc kubenswrapper[4697]: I0127 15:13:05.062743 4697 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2d7f92b8-c57f-451b-847b-bfc0b780e73b" Jan 27 15:13:12 crc kubenswrapper[4697]: I0127 15:13:12.870099 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" 
Jan 27 15:13:13 crc kubenswrapper[4697]: I0127 15:13:13.044807 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 15:13:13 crc kubenswrapper[4697]: I0127 15:13:13.106873 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 15:13:13 crc kubenswrapper[4697]: I0127 15:13:13.115329 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 15:13:13 crc kubenswrapper[4697]: I0127 15:13:13.647297 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 15:13:13 crc kubenswrapper[4697]: I0127 15:13:13.701651 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 15:13:13 crc kubenswrapper[4697]: I0127 15:13:13.820989 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:13:13 crc kubenswrapper[4697]: I0127 15:13:13.902485 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 15:13:13 crc kubenswrapper[4697]: I0127 15:13:13.954554 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 15:13:14 crc kubenswrapper[4697]: I0127 15:13:14.021849 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 15:13:14 crc kubenswrapper[4697]: I0127 15:13:14.116431 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 15:13:14 crc kubenswrapper[4697]: I0127 15:13:14.158500 4697 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 15:13:14 crc kubenswrapper[4697]: I0127 15:13:14.438511 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 15:13:14 crc kubenswrapper[4697]: I0127 15:13:14.714845 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 15:13:14 crc kubenswrapper[4697]: I0127 15:13:14.802584 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 15:13:14 crc kubenswrapper[4697]: I0127 15:13:14.950594 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 15:13:15 crc kubenswrapper[4697]: I0127 15:13:15.139049 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 15:13:15 crc kubenswrapper[4697]: I0127 15:13:15.253027 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 15:13:15 crc kubenswrapper[4697]: I0127 15:13:15.475492 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 15:13:15 crc kubenswrapper[4697]: I0127 15:13:15.483313 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 15:13:15 crc kubenswrapper[4697]: I0127 15:13:15.596266 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 15:13:15 crc kubenswrapper[4697]: I0127 15:13:15.609630 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 
27 15:13:15 crc kubenswrapper[4697]: I0127 15:13:15.667670 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 15:13:15 crc kubenswrapper[4697]: I0127 15:13:15.789361 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 15:13:15 crc kubenswrapper[4697]: I0127 15:13:15.802535 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.092948 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.108885 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.125057 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.231286 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.292329 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.299636 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.323050 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.353720 4697 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.360918 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.441418 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.651032 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.672378 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.681375 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.690956 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.710048 4697 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.729075 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.753162 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 15:13:16 crc kubenswrapper[4697]: I0127 15:13:16.842364 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.053538 4697 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.171399 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.248854 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.283310 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.370118 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.448565 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.502737 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.548768 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.638465 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.678129 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.681207 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.776281 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.879287 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.897750 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.920916 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 15:13:17 crc kubenswrapper[4697]: I0127 15:13:17.972730 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.019224 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.031189 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.086533 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.107373 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.173175 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 15:13:18 crc 
kubenswrapper[4697]: I0127 15:13:18.175327 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.196484 4697 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.254987 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.262929 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.312274 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.370486 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.389208 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.496234 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.532206 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.777220 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.804136 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"kube-root-ca.crt" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.936481 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.987422 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 15:13:18 crc kubenswrapper[4697]: I0127 15:13:18.987475 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.001277 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.001276 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.002836 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.030342 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.094589 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.133858 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.171541 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 
15:13:19.371887 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.420137 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.505239 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.556758 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.596545 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.610836 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.771408 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.807217 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.826257 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.851949 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.895686 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 15:13:19 
crc kubenswrapper[4697]: I0127 15:13:19.926144 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 15:13:19 crc kubenswrapper[4697]: I0127 15:13:19.946870 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 15:13:20 crc kubenswrapper[4697]: I0127 15:13:20.034486 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:13:20 crc kubenswrapper[4697]: I0127 15:13:20.129539 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 15:13:20 crc kubenswrapper[4697]: I0127 15:13:20.538389 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 15:13:20 crc kubenswrapper[4697]: I0127 15:13:20.554919 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 15:13:20 crc kubenswrapper[4697]: I0127 15:13:20.661137 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 15:13:20 crc kubenswrapper[4697]: I0127 15:13:20.671555 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 15:13:20 crc kubenswrapper[4697]: I0127 15:13:20.762164 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 15:13:20 crc kubenswrapper[4697]: I0127 15:13:20.762201 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.006368 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 15:13:21 crc 
kubenswrapper[4697]: I0127 15:13:21.047727 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.061134 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.061676 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.074016 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.081891 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.125851 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.130558 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.138539 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.174292 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.246133 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.310304 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.373568 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.489556 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.538582 4697 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.586572 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.699395 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.729150 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.756428 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.786086 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.847590 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.914213 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.959361 4697 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 15:13:21 crc kubenswrapper[4697]: I0127 15:13:21.961742 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.003066 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.015964 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.155644 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.223763 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.261726 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.277561 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.293617 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.322401 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.367481 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 15:13:22 
crc kubenswrapper[4697]: I0127 15:13:22.617482 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.700421 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.808241 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.849183 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.858913 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.861958 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.864235 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.874070 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.935278 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:13:22 crc kubenswrapper[4697]: I0127 15:13:22.995192 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.051670 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 
15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.065868 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.166343 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.253061 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.323666 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.427402 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.431271 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.469047 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.479491 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.509277 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.509993 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.515128 4697 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.559864 4697 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.563890 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tgccn","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.563955 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.568374 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.585221 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.585200217 podStartE2EDuration="20.585200217s" podCreationTimestamp="2026-01-27 15:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:13:23.582340996 +0000 UTC m=+299.754740787" watchObservedRunningTime="2026-01-27 15:13:23.585200217 +0000 UTC m=+299.757599998" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.612885 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.636643 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.642119 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 
15:13:23.720521 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.726877 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.792738 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.825711 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.831553 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.942992 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.943066 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.951080 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 15:13:23 crc kubenswrapper[4697]: I0127 15:13:23.954174 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.025908 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.128982 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 15:13:24 crc 
kubenswrapper[4697]: I0127 15:13:24.160435 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.316939 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.368490 4697 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.492622 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.537210 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.546380 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.580700 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d0b154-f221-4132-9d6f-a17173841b1f" path="/var/lib/kubelet/pods/d1d0b154-f221-4132-9d6f-a17173841b1f/volumes" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.606424 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.639878 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.781112 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 
15:13:24.793070 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.798381 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.843338 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.911827 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr"] Jan 27 15:13:24 crc kubenswrapper[4697]: E0127 15:13:24.912042 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" containerName="installer" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.912053 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" containerName="installer" Jan 27 15:13:24 crc kubenswrapper[4697]: E0127 15:13:24.912065 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d0b154-f221-4132-9d6f-a17173841b1f" containerName="oauth-openshift" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.912071 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d0b154-f221-4132-9d6f-a17173841b1f" containerName="oauth-openshift" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.912152 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d0b154-f221-4132-9d6f-a17173841b1f" containerName="oauth-openshift" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.912163 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d02477-81a8-4453-bac0-0aaed3d659b5" containerName="installer" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.912661 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.914083 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.916063 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.916245 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.918832 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.919043 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.919045 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.919159 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.919281 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.919412 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.919463 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 15:13:24 crc kubenswrapper[4697]: 
I0127 15:13:24.919637 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.919728 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.920662 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.920705 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-router-certs\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.920763 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-session\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.920808 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-template-login\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.920832 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.920862 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.921237 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-audit-policies\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.921269 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2bccc58-19ea-48f0-968e-4510ac4ca415-audit-dir\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " 
pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.921297 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.921348 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-service-ca\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.921434 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.921463 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-template-error\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 
15:13:24.921501 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlmr7\" (UniqueName: \"kubernetes.io/projected/d2bccc58-19ea-48f0-968e-4510ac4ca415-kube-api-access-nlmr7\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.921535 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.930176 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.936099 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.936152 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 15:13:24 crc kubenswrapper[4697]: I0127 15:13:24.985274 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.008007 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022237 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlmr7\" (UniqueName: 
\"kubernetes.io/projected/d2bccc58-19ea-48f0-968e-4510ac4ca415-kube-api-access-nlmr7\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022288 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022315 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022338 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-router-certs\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022360 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-session\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " 
pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022379 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-template-login\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022393 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022413 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022427 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-audit-policies\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022445 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/d2bccc58-19ea-48f0-968e-4510ac4ca415-audit-dir\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022467 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022488 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-service-ca\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022510 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022527 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-template-error\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " 
pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.022653 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2bccc58-19ea-48f0-968e-4510ac4ca415-audit-dir\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.023225 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.023353 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-audit-policies\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.023429 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-service-ca\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.024024 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.029004 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-router-certs\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.029320 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-session\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.029352 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.030233 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-template-login\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " 
pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.030964 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.032331 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-template-error\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.033200 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.039155 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2bccc58-19ea-48f0-968e-4510ac4ca415-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.039574 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlmr7\" 
(UniqueName: \"kubernetes.io/projected/d2bccc58-19ea-48f0-968e-4510ac4ca415-kube-api-access-nlmr7\") pod \"oauth-openshift-6db46f4cd8-wgjdr\" (UID: \"d2bccc58-19ea-48f0-968e-4510ac4ca415\") " pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.123384 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.142706 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.153656 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.236059 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.257824 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr"] Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.287874 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.347149 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.394401 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.499864 4697 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 
15:13:25.535340 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.555272 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.637619 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr"] Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.707848 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.722147 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.732772 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.742085 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.848296 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.890589 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.904499 4697 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.904735 4697 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8ba8e6a601928789e2e3281bae11d875b8a6c0acec19e1a79779b980f0567b78" gracePeriod=5 Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.939231 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.941276 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 15:13:25 crc kubenswrapper[4697]: I0127 15:13:25.960571 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.043751 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.174918 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" event={"ID":"d2bccc58-19ea-48f0-968e-4510ac4ca415","Type":"ContainerStarted","Data":"6abf2e8c6ee9569c784202c5aa17aaf1785922596c11db6800e50b19a9f51545"} Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.174962 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" event={"ID":"d2bccc58-19ea-48f0-968e-4510ac4ca415","Type":"ContainerStarted","Data":"d4eb8dffe1b9a55bf1327c02aca85338b3abef7adf7678042dc28177705ec20c"} Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.175242 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.197103 4697 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.197908 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" podStartSLOduration=51.197881303 podStartE2EDuration="51.197881303s" podCreationTimestamp="2026-01-27 15:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:13:26.194694054 +0000 UTC m=+302.367093835" watchObservedRunningTime="2026-01-27 15:13:26.197881303 +0000 UTC m=+302.370281094" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.234161 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.246473 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.255985 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.268257 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.394966 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6db46f4cd8-wgjdr" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.470011 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.487795 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 15:13:26 
crc kubenswrapper[4697]: I0127 15:13:26.493979 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.591955 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.625270 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.771551 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.772075 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.981321 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 15:13:26 crc kubenswrapper[4697]: I0127 15:13:26.986874 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 15:13:27 crc kubenswrapper[4697]: I0127 15:13:27.144659 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 15:13:27 crc kubenswrapper[4697]: I0127 15:13:27.158776 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 15:13:27 crc kubenswrapper[4697]: I0127 15:13:27.163123 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 15:13:27 crc kubenswrapper[4697]: I0127 15:13:27.368830 
4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 15:13:27 crc kubenswrapper[4697]: I0127 15:13:27.509346 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 15:13:27 crc kubenswrapper[4697]: I0127 15:13:27.547988 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 15:13:27 crc kubenswrapper[4697]: I0127 15:13:27.657773 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 15:13:27 crc kubenswrapper[4697]: I0127 15:13:27.745182 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 15:13:27 crc kubenswrapper[4697]: I0127 15:13:27.805513 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 15:13:28 crc kubenswrapper[4697]: I0127 15:13:28.097079 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 15:13:28 crc kubenswrapper[4697]: I0127 15:13:28.346071 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 15:13:28 crc kubenswrapper[4697]: I0127 15:13:28.460574 4697 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 15:13:28 crc kubenswrapper[4697]: I0127 15:13:28.787257 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 15:13:29 crc kubenswrapper[4697]: I0127 15:13:29.082506 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 15:13:29 crc kubenswrapper[4697]: 
I0127 15:13:29.118746 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.203948 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.205152 4697 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8ba8e6a601928789e2e3281bae11d875b8a6c0acec19e1a79779b980f0567b78" exitCode=137 Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.552016 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.552078 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719365 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719466 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719486 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719504 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719523 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719608 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: 
"resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719608 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719678 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719631 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719874 4697 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719892 4697 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719903 4697 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.719913 4697 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.728743 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:13:31 crc kubenswrapper[4697]: I0127 15:13:31.820870 4697 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:32 crc kubenswrapper[4697]: I0127 15:13:32.213181 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 15:13:32 crc kubenswrapper[4697]: I0127 15:13:32.213245 4697 scope.go:117] "RemoveContainer" containerID="8ba8e6a601928789e2e3281bae11d875b8a6c0acec19e1a79779b980f0567b78" Jan 27 15:13:32 crc kubenswrapper[4697]: I0127 15:13:32.213340 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:13:32 crc kubenswrapper[4697]: I0127 15:13:32.578605 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.205219 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f"] Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.205699 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" podUID="f706f519-858d-4da1-8b59-5e40c35e0b0d" containerName="controller-manager" containerID="cri-o://a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb" gracePeriod=30 Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.314024 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw"] Jan 27 15:13:33 
crc kubenswrapper[4697]: I0127 15:13:33.314331 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" podUID="e25ade21-cfd1-429b-98a7-d4d886130348" containerName="route-controller-manager" containerID="cri-o://7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc" gracePeriod=30 Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.545520 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.644086 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-config\") pod \"f706f519-858d-4da1-8b59-5e40c35e0b0d\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.645050 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-config" (OuterVolumeSpecName: "config") pod "f706f519-858d-4da1-8b59-5e40c35e0b0d" (UID: "f706f519-858d-4da1-8b59-5e40c35e0b0d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.645114 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-proxy-ca-bundles\") pod \"f706f519-858d-4da1-8b59-5e40c35e0b0d\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.645158 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-client-ca\") pod \"f706f519-858d-4da1-8b59-5e40c35e0b0d\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.645194 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f706f519-858d-4da1-8b59-5e40c35e0b0d-serving-cert\") pod \"f706f519-858d-4da1-8b59-5e40c35e0b0d\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.645214 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhjp4\" (UniqueName: \"kubernetes.io/projected/f706f519-858d-4da1-8b59-5e40c35e0b0d-kube-api-access-zhjp4\") pod \"f706f519-858d-4da1-8b59-5e40c35e0b0d\" (UID: \"f706f519-858d-4da1-8b59-5e40c35e0b0d\") " Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.645422 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.647023 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"f706f519-858d-4da1-8b59-5e40c35e0b0d" (UID: "f706f519-858d-4da1-8b59-5e40c35e0b0d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.647374 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-client-ca" (OuterVolumeSpecName: "client-ca") pod "f706f519-858d-4da1-8b59-5e40c35e0b0d" (UID: "f706f519-858d-4da1-8b59-5e40c35e0b0d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.651151 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f706f519-858d-4da1-8b59-5e40c35e0b0d-kube-api-access-zhjp4" (OuterVolumeSpecName: "kube-api-access-zhjp4") pod "f706f519-858d-4da1-8b59-5e40c35e0b0d" (UID: "f706f519-858d-4da1-8b59-5e40c35e0b0d"). InnerVolumeSpecName "kube-api-access-zhjp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.658165 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f706f519-858d-4da1-8b59-5e40c35e0b0d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f706f519-858d-4da1-8b59-5e40c35e0b0d" (UID: "f706f519-858d-4da1-8b59-5e40c35e0b0d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.746253 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.746277 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f706f519-858d-4da1-8b59-5e40c35e0b0d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.746287 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhjp4\" (UniqueName: \"kubernetes.io/projected/f706f519-858d-4da1-8b59-5e40c35e0b0d-kube-api-access-zhjp4\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.746296 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f706f519-858d-4da1-8b59-5e40c35e0b0d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.783618 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.950761 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-config\") pod \"e25ade21-cfd1-429b-98a7-d4d886130348\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.950817 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e25ade21-cfd1-429b-98a7-d4d886130348-serving-cert\") pod \"e25ade21-cfd1-429b-98a7-d4d886130348\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.950859 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bngd8\" (UniqueName: \"kubernetes.io/projected/e25ade21-cfd1-429b-98a7-d4d886130348-kube-api-access-bngd8\") pod \"e25ade21-cfd1-429b-98a7-d4d886130348\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.950903 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-client-ca\") pod \"e25ade21-cfd1-429b-98a7-d4d886130348\" (UID: \"e25ade21-cfd1-429b-98a7-d4d886130348\") " Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.951449 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-config" (OuterVolumeSpecName: "config") pod "e25ade21-cfd1-429b-98a7-d4d886130348" (UID: "e25ade21-cfd1-429b-98a7-d4d886130348"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.951458 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-client-ca" (OuterVolumeSpecName: "client-ca") pod "e25ade21-cfd1-429b-98a7-d4d886130348" (UID: "e25ade21-cfd1-429b-98a7-d4d886130348"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.957415 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25ade21-cfd1-429b-98a7-d4d886130348-kube-api-access-bngd8" (OuterVolumeSpecName: "kube-api-access-bngd8") pod "e25ade21-cfd1-429b-98a7-d4d886130348" (UID: "e25ade21-cfd1-429b-98a7-d4d886130348"). InnerVolumeSpecName "kube-api-access-bngd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:13:33 crc kubenswrapper[4697]: I0127 15:13:33.957757 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25ade21-cfd1-429b-98a7-d4d886130348-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e25ade21-cfd1-429b-98a7-d4d886130348" (UID: "e25ade21-cfd1-429b-98a7-d4d886130348"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.051627 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bngd8\" (UniqueName: \"kubernetes.io/projected/e25ade21-cfd1-429b-98a7-d4d886130348-kube-api-access-bngd8\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.051660 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.051672 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25ade21-cfd1-429b-98a7-d4d886130348-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.051680 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e25ade21-cfd1-429b-98a7-d4d886130348-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.230517 4697 generic.go:334] "Generic (PLEG): container finished" podID="e25ade21-cfd1-429b-98a7-d4d886130348" containerID="7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc" exitCode=0 Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.230566 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" event={"ID":"e25ade21-cfd1-429b-98a7-d4d886130348","Type":"ContainerDied","Data":"7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc"} Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.230879 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" 
event={"ID":"e25ade21-cfd1-429b-98a7-d4d886130348","Type":"ContainerDied","Data":"e66384b144357903c71e566c55e2135f6c22aa0e87bd3c3284f7762e5c7a1c69"} Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.230900 4697 scope.go:117] "RemoveContainer" containerID="7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.230618 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.236479 4697 generic.go:334] "Generic (PLEG): container finished" podID="f706f519-858d-4da1-8b59-5e40c35e0b0d" containerID="a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb" exitCode=0 Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.236534 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" event={"ID":"f706f519-858d-4da1-8b59-5e40c35e0b0d","Type":"ContainerDied","Data":"a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb"} Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.236602 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" event={"ID":"f706f519-858d-4da1-8b59-5e40c35e0b0d","Type":"ContainerDied","Data":"6acb07c3c6342364d0f4722530f8b7a6046ed49ddcaa69f4e381a4679a471461"} Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.236602 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.264089 4697 scope.go:117] "RemoveContainer" containerID="7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc" Jan 27 15:13:34 crc kubenswrapper[4697]: E0127 15:13:34.265563 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc\": container with ID starting with 7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc not found: ID does not exist" containerID="7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.265639 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc"} err="failed to get container status \"7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc\": rpc error: code = NotFound desc = could not find container \"7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc\": container with ID starting with 7f456b7ab5d8a3ac88ea78e3f4ffd16e24141851bd5ca5b23bd1ad0e54505bfc not found: ID does not exist" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.265697 4697 scope.go:117] "RemoveContainer" containerID="a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.278229 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw"] Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.283891 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-559f79478c-f5jqw"] Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.294647 4697 scope.go:117] 
"RemoveContainer" containerID="a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb" Jan 27 15:13:34 crc kubenswrapper[4697]: E0127 15:13:34.295001 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb\": container with ID starting with a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb not found: ID does not exist" containerID="a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.295052 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb"} err="failed to get container status \"a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb\": rpc error: code = NotFound desc = could not find container \"a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb\": container with ID starting with a5417ef2176b3addde8457d25dd6026da6b4fe45cf204212aeb1ef2c707394bb not found: ID does not exist" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.295768 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f"] Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.301353 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84cff6fc4f-9ln2f"] Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.581514 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25ade21-cfd1-429b-98a7-d4d886130348" path="/var/lib/kubelet/pods/e25ade21-cfd1-429b-98a7-d4d886130348/volumes" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.582850 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f706f519-858d-4da1-8b59-5e40c35e0b0d" 
path="/var/lib/kubelet/pods/f706f519-858d-4da1-8b59-5e40c35e0b0d/volumes" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.926347 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74cb6546b7-s2q7p"] Jan 27 15:13:34 crc kubenswrapper[4697]: E0127 15:13:34.926992 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.927141 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 15:13:34 crc kubenswrapper[4697]: E0127 15:13:34.927264 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f706f519-858d-4da1-8b59-5e40c35e0b0d" containerName="controller-manager" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.927370 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f706f519-858d-4da1-8b59-5e40c35e0b0d" containerName="controller-manager" Jan 27 15:13:34 crc kubenswrapper[4697]: E0127 15:13:34.927494 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25ade21-cfd1-429b-98a7-d4d886130348" containerName="route-controller-manager" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.927599 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25ade21-cfd1-429b-98a7-d4d886130348" containerName="route-controller-manager" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.927887 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f706f519-858d-4da1-8b59-5e40c35e0b0d" containerName="controller-manager" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.928035 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25ade21-cfd1-429b-98a7-d4d886130348" containerName="route-controller-manager" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.928184 4697 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.928892 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.937602 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8"] Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.938258 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74cb6546b7-s2q7p"] Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.938337 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.961982 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-client-ca\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.962017 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-proxy-ca-bundles\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.962055 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0ab1fe32-596c-4980-ba28-0fe3b4505893-serving-cert\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.962098 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-config\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.962129 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5bd\" (UniqueName: \"kubernetes.io/projected/0ab1fe32-596c-4980-ba28-0fe3b4505893-kube-api-access-zp5bd\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.962165 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6626193-f6ab-41ff-85d7-5ab64ff952a7-serving-cert\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.962185 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-client-ca\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " 
pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.962201 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-config\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.962231 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdwmc\" (UniqueName: \"kubernetes.io/projected/e6626193-f6ab-41ff-85d7-5ab64ff952a7-kube-api-access-tdwmc\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.964017 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.964053 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.965740 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.968735 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.973698 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8"] Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 
15:13:34.973864 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.973901 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.973925 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.973974 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.974054 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.974095 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.974148 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.974290 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:13:34 crc kubenswrapper[4697]: I0127 15:13:34.974333 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.063138 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6626193-f6ab-41ff-85d7-5ab64ff952a7-serving-cert\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " 
pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.063187 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-client-ca\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.063214 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-config\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.063896 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdwmc\" (UniqueName: \"kubernetes.io/projected/e6626193-f6ab-41ff-85d7-5ab64ff952a7-kube-api-access-tdwmc\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.064084 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-client-ca\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.064190 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-proxy-ca-bundles\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.064286 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-config\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.064391 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ab1fe32-596c-4980-ba28-0fe3b4505893-serving-cert\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.064499 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-config\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.064649 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-client-ca\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.064810 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zp5bd\" (UniqueName: \"kubernetes.io/projected/0ab1fe32-596c-4980-ba28-0fe3b4505893-kube-api-access-zp5bd\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.065095 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-client-ca\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.065183 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-proxy-ca-bundles\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.067476 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-config\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.068684 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ab1fe32-596c-4980-ba28-0fe3b4505893-serving-cert\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc 
kubenswrapper[4697]: I0127 15:13:35.068925 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6626193-f6ab-41ff-85d7-5ab64ff952a7-serving-cert\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.079807 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdwmc\" (UniqueName: \"kubernetes.io/projected/e6626193-f6ab-41ff-85d7-5ab64ff952a7-kube-api-access-tdwmc\") pod \"route-controller-manager-8bbc94894-k62l8\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.080319 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp5bd\" (UniqueName: \"kubernetes.io/projected/0ab1fe32-596c-4980-ba28-0fe3b4505893-kube-api-access-zp5bd\") pod \"controller-manager-74cb6546b7-s2q7p\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.281633 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.292333 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.520290 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8"] Jan 27 15:13:35 crc kubenswrapper[4697]: I0127 15:13:35.687557 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74cb6546b7-s2q7p"] Jan 27 15:13:35 crc kubenswrapper[4697]: W0127 15:13:35.694736 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ab1fe32_596c_4980_ba28_0fe3b4505893.slice/crio-87f1bc28f9756fb0285c41b4f8874d9e32c4183af62630627ca9384ce125ca54 WatchSource:0}: Error finding container 87f1bc28f9756fb0285c41b4f8874d9e32c4183af62630627ca9384ce125ca54: Status 404 returned error can't find the container with id 87f1bc28f9756fb0285c41b4f8874d9e32c4183af62630627ca9384ce125ca54 Jan 27 15:13:36 crc kubenswrapper[4697]: I0127 15:13:36.249274 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" event={"ID":"e6626193-f6ab-41ff-85d7-5ab64ff952a7","Type":"ContainerStarted","Data":"36cc6da3a5a976300b2a92d42f48598ec6870a295b82306c21a81a5ff15a0de5"} Jan 27 15:13:36 crc kubenswrapper[4697]: I0127 15:13:36.249642 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:36 crc kubenswrapper[4697]: I0127 15:13:36.249658 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" event={"ID":"e6626193-f6ab-41ff-85d7-5ab64ff952a7","Type":"ContainerStarted","Data":"64156f7a15043beb217f18d3bff2cd8f01f47c355dab46cdc33a9f78f8543421"} Jan 27 15:13:36 crc kubenswrapper[4697]: 
I0127 15:13:36.250919 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" event={"ID":"0ab1fe32-596c-4980-ba28-0fe3b4505893","Type":"ContainerStarted","Data":"6348e2e7ea616655c5dbcdd6ce5384775a0cbc71a4e6b982c142e411814f8fbe"} Jan 27 15:13:36 crc kubenswrapper[4697]: I0127 15:13:36.250960 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" event={"ID":"0ab1fe32-596c-4980-ba28-0fe3b4505893","Type":"ContainerStarted","Data":"87f1bc28f9756fb0285c41b4f8874d9e32c4183af62630627ca9384ce125ca54"} Jan 27 15:13:36 crc kubenswrapper[4697]: I0127 15:13:36.251350 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:36 crc kubenswrapper[4697]: I0127 15:13:36.271811 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:36 crc kubenswrapper[4697]: I0127 15:13:36.277580 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:36 crc kubenswrapper[4697]: I0127 15:13:36.287424 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" podStartSLOduration=3.28740402 podStartE2EDuration="3.28740402s" podCreationTimestamp="2026-01-27 15:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:13:36.283689147 +0000 UTC m=+312.456088938" watchObservedRunningTime="2026-01-27 15:13:36.28740402 +0000 UTC m=+312.459803811" Jan 27 15:13:36 crc kubenswrapper[4697]: I0127 15:13:36.303261 4697 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" podStartSLOduration=3.303238903 podStartE2EDuration="3.303238903s" podCreationTimestamp="2026-01-27 15:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:13:36.301589532 +0000 UTC m=+312.473989313" watchObservedRunningTime="2026-01-27 15:13:36.303238903 +0000 UTC m=+312.475638694" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.137681 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74cb6546b7-s2q7p"] Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.138620 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" podUID="0ab1fe32-596c-4980-ba28-0fe3b4505893" containerName="controller-manager" containerID="cri-o://6348e2e7ea616655c5dbcdd6ce5384775a0cbc71a4e6b982c142e411814f8fbe" gracePeriod=30 Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.143844 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8"] Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.147371 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" podUID="e6626193-f6ab-41ff-85d7-5ab64ff952a7" containerName="route-controller-manager" containerID="cri-o://36cc6da3a5a976300b2a92d42f48598ec6870a295b82306c21a81a5ff15a0de5" gracePeriod=30 Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.338380 4697 generic.go:334] "Generic (PLEG): container finished" podID="0ab1fe32-596c-4980-ba28-0fe3b4505893" containerID="6348e2e7ea616655c5dbcdd6ce5384775a0cbc71a4e6b982c142e411814f8fbe" exitCode=0 Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.338451 4697 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" event={"ID":"0ab1fe32-596c-4980-ba28-0fe3b4505893","Type":"ContainerDied","Data":"6348e2e7ea616655c5dbcdd6ce5384775a0cbc71a4e6b982c142e411814f8fbe"} Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.339826 4697 generic.go:334] "Generic (PLEG): container finished" podID="baa7401d-bcad-4175-af1b-46414c003f9e" containerID="31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83" exitCode=0 Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.339879 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" event={"ID":"baa7401d-bcad-4175-af1b-46414c003f9e","Type":"ContainerDied","Data":"31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83"} Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.340299 4697 scope.go:117] "RemoveContainer" containerID="31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.341934 4697 generic.go:334] "Generic (PLEG): container finished" podID="e6626193-f6ab-41ff-85d7-5ab64ff952a7" containerID="36cc6da3a5a976300b2a92d42f48598ec6870a295b82306c21a81a5ff15a0de5" exitCode=0 Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.341973 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" event={"ID":"e6626193-f6ab-41ff-85d7-5ab64ff952a7","Type":"ContainerDied","Data":"36cc6da3a5a976300b2a92d42f48598ec6870a295b82306c21a81a5ff15a0de5"} Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.677248 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.751643 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.828612 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdwmc\" (UniqueName: \"kubernetes.io/projected/e6626193-f6ab-41ff-85d7-5ab64ff952a7-kube-api-access-tdwmc\") pod \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.828707 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6626193-f6ab-41ff-85d7-5ab64ff952a7-serving-cert\") pod \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.828753 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-config\") pod \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.828816 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-client-ca\") pod \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\" (UID: \"e6626193-f6ab-41ff-85d7-5ab64ff952a7\") " Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.829452 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6626193-f6ab-41ff-85d7-5ab64ff952a7" (UID: "e6626193-f6ab-41ff-85d7-5ab64ff952a7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.829624 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-config" (OuterVolumeSpecName: "config") pod "e6626193-f6ab-41ff-85d7-5ab64ff952a7" (UID: "e6626193-f6ab-41ff-85d7-5ab64ff952a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.833369 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6626193-f6ab-41ff-85d7-5ab64ff952a7-kube-api-access-tdwmc" (OuterVolumeSpecName: "kube-api-access-tdwmc") pod "e6626193-f6ab-41ff-85d7-5ab64ff952a7" (UID: "e6626193-f6ab-41ff-85d7-5ab64ff952a7"). InnerVolumeSpecName "kube-api-access-tdwmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.834048 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6626193-f6ab-41ff-85d7-5ab64ff952a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6626193-f6ab-41ff-85d7-5ab64ff952a7" (UID: "e6626193-f6ab-41ff-85d7-5ab64ff952a7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.929867 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-config\") pod \"0ab1fe32-596c-4980-ba28-0fe3b4505893\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.929923 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-client-ca\") pod \"0ab1fe32-596c-4980-ba28-0fe3b4505893\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.929950 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-proxy-ca-bundles\") pod \"0ab1fe32-596c-4980-ba28-0fe3b4505893\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.929973 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp5bd\" (UniqueName: \"kubernetes.io/projected/0ab1fe32-596c-4980-ba28-0fe3b4505893-kube-api-access-zp5bd\") pod \"0ab1fe32-596c-4980-ba28-0fe3b4505893\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.930067 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ab1fe32-596c-4980-ba28-0fe3b4505893-serving-cert\") pod \"0ab1fe32-596c-4980-ba28-0fe3b4505893\" (UID: \"0ab1fe32-596c-4980-ba28-0fe3b4505893\") " Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.930239 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.930250 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdwmc\" (UniqueName: \"kubernetes.io/projected/e6626193-f6ab-41ff-85d7-5ab64ff952a7-kube-api-access-tdwmc\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.930259 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6626193-f6ab-41ff-85d7-5ab64ff952a7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.930268 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6626193-f6ab-41ff-85d7-5ab64ff952a7-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.930967 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0ab1fe32-596c-4980-ba28-0fe3b4505893" (UID: "0ab1fe32-596c-4980-ba28-0fe3b4505893"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.931014 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-config" (OuterVolumeSpecName: "config") pod "0ab1fe32-596c-4980-ba28-0fe3b4505893" (UID: "0ab1fe32-596c-4980-ba28-0fe3b4505893"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.931208 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-client-ca" (OuterVolumeSpecName: "client-ca") pod "0ab1fe32-596c-4980-ba28-0fe3b4505893" (UID: "0ab1fe32-596c-4980-ba28-0fe3b4505893"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.932689 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab1fe32-596c-4980-ba28-0fe3b4505893-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0ab1fe32-596c-4980-ba28-0fe3b4505893" (UID: "0ab1fe32-596c-4980-ba28-0fe3b4505893"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:13:54 crc kubenswrapper[4697]: I0127 15:13:54.933077 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab1fe32-596c-4980-ba28-0fe3b4505893-kube-api-access-zp5bd" (OuterVolumeSpecName: "kube-api-access-zp5bd") pod "0ab1fe32-596c-4980-ba28-0fe3b4505893" (UID: "0ab1fe32-596c-4980-ba28-0fe3b4505893"). InnerVolumeSpecName "kube-api-access-zp5bd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.031086 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.031132 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.031145 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ab1fe32-596c-4980-ba28-0fe3b4505893-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.031158 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp5bd\" (UniqueName: \"kubernetes.io/projected/0ab1fe32-596c-4980-ba28-0fe3b4505893-kube-api-access-zp5bd\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.031170 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ab1fe32-596c-4980-ba28-0fe3b4505893-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.348115 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" event={"ID":"0ab1fe32-596c-4980-ba28-0fe3b4505893","Type":"ContainerDied","Data":"87f1bc28f9756fb0285c41b4f8874d9e32c4183af62630627ca9384ce125ca54"} Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.348168 4697 scope.go:117] "RemoveContainer" containerID="6348e2e7ea616655c5dbcdd6ce5384775a0cbc71a4e6b982c142e411814f8fbe" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.348278 4697 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74cb6546b7-s2q7p" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.352252 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" event={"ID":"baa7401d-bcad-4175-af1b-46414c003f9e","Type":"ContainerStarted","Data":"5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21"} Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.352608 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.353858 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" event={"ID":"e6626193-f6ab-41ff-85d7-5ab64ff952a7","Type":"ContainerDied","Data":"64156f7a15043beb217f18d3bff2cd8f01f47c355dab46cdc33a9f78f8543421"} Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.353925 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.359977 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.367013 4697 scope.go:117] "RemoveContainer" containerID="36cc6da3a5a976300b2a92d42f48598ec6870a295b82306c21a81a5ff15a0de5" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.383388 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74cb6546b7-s2q7p"] Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.387598 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74cb6546b7-s2q7p"] Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.425100 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8"] Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.425976 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbc94894-k62l8"] Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.933707 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v"] Jan 27 15:13:55 crc kubenswrapper[4697]: E0127 15:13:55.934618 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6626193-f6ab-41ff-85d7-5ab64ff952a7" containerName="route-controller-manager" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.934652 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6626193-f6ab-41ff-85d7-5ab64ff952a7" containerName="route-controller-manager" Jan 27 15:13:55 crc kubenswrapper[4697]: E0127 15:13:55.934680 4697 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0ab1fe32-596c-4980-ba28-0fe3b4505893" containerName="controller-manager" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.934694 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab1fe32-596c-4980-ba28-0fe3b4505893" containerName="controller-manager" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.934941 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6626193-f6ab-41ff-85d7-5ab64ff952a7" containerName="route-controller-manager" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.934978 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab1fe32-596c-4980-ba28-0fe3b4505893" containerName="controller-manager" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.935667 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.939920 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.940237 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.940377 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.940498 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.940626 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.941197 4697 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.942403 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4"] Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.942733 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-config\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.942814 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-client-ca\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.947414 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.950251 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.950354 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.951116 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.951992 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4"] Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.953653 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.953870 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.953991 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:13:55 crc kubenswrapper[4697]: I0127 15:13:55.960362 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.002622 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v"] Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.043606 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-config\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.043663 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47khj\" (UniqueName: \"kubernetes.io/projected/9f3206e6-eb45-4789-960d-dcf495b779df-kube-api-access-47khj\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.043697 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-client-ca\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.043923 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f3206e6-eb45-4789-960d-dcf495b779df-serving-cert\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.044675 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-client-ca\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " 
pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.044952 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-config\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.145269 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c44113b6-1f57-4210-9564-fdaa63500f94-serving-cert\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.145321 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-client-ca\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.145467 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f3206e6-eb45-4789-960d-dcf495b779df-serving-cert\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.145633 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-proxy-ca-bundles\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.145703 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-config\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.145901 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcdnn\" (UniqueName: \"kubernetes.io/projected/c44113b6-1f57-4210-9564-fdaa63500f94-kube-api-access-vcdnn\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.146078 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47khj\" (UniqueName: \"kubernetes.io/projected/9f3206e6-eb45-4789-960d-dcf495b779df-kube-api-access-47khj\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.149891 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f3206e6-eb45-4789-960d-dcf495b779df-serving-cert\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " 
pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.161445 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47khj\" (UniqueName: \"kubernetes.io/projected/9f3206e6-eb45-4789-960d-dcf495b779df-kube-api-access-47khj\") pod \"route-controller-manager-c6cc786fd-wk79v\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.247071 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcdnn\" (UniqueName: \"kubernetes.io/projected/c44113b6-1f57-4210-9564-fdaa63500f94-kube-api-access-vcdnn\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.247173 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c44113b6-1f57-4210-9564-fdaa63500f94-serving-cert\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.247209 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-client-ca\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.247260 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-proxy-ca-bundles\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.247285 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-config\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.248442 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-client-ca\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.248796 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-proxy-ca-bundles\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.248923 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-config\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.251115 4697 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c44113b6-1f57-4210-9564-fdaa63500f94-serving-cert\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.251328 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.279720 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcdnn\" (UniqueName: \"kubernetes.io/projected/c44113b6-1f57-4210-9564-fdaa63500f94-kube-api-access-vcdnn\") pod \"controller-manager-7c8d4bc847-l2cj4\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.309154 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.577027 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab1fe32-596c-4980-ba28-0fe3b4505893" path="/var/lib/kubelet/pods/0ab1fe32-596c-4980-ba28-0fe3b4505893/volumes" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.578505 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6626193-f6ab-41ff-85d7-5ab64ff952a7" path="/var/lib/kubelet/pods/e6626193-f6ab-41ff-85d7-5ab64ff952a7/volumes" Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.652932 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v"] Jan 27 15:13:56 crc kubenswrapper[4697]: W0127 15:13:56.655838 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f3206e6_eb45_4789_960d_dcf495b779df.slice/crio-4fe726b1ae1051acf6743865ab0ac2914031a9b68fcdcc73d1c2042620b9003f WatchSource:0}: Error finding container 4fe726b1ae1051acf6743865ab0ac2914031a9b68fcdcc73d1c2042620b9003f: Status 404 returned error can't find the container with id 4fe726b1ae1051acf6743865ab0ac2914031a9b68fcdcc73d1c2042620b9003f Jan 27 15:13:56 crc kubenswrapper[4697]: I0127 15:13:56.727243 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4"] Jan 27 15:13:56 crc kubenswrapper[4697]: W0127 15:13:56.745580 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44113b6_1f57_4210_9564_fdaa63500f94.slice/crio-ca999e72f9f1ae998a5a070463beafdb1d608a6d9752ec71812e54abbba0d9f3 WatchSource:0}: Error finding container ca999e72f9f1ae998a5a070463beafdb1d608a6d9752ec71812e54abbba0d9f3: Status 404 returned error can't find the container with id 
ca999e72f9f1ae998a5a070463beafdb1d608a6d9752ec71812e54abbba0d9f3 Jan 27 15:13:57 crc kubenswrapper[4697]: I0127 15:13:57.377323 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" event={"ID":"9f3206e6-eb45-4789-960d-dcf495b779df","Type":"ContainerStarted","Data":"e7cfd91f6848b10dfcf69e14058dda786aa56f27d3c24ba23e260c29d6132101"} Jan 27 15:13:57 crc kubenswrapper[4697]: I0127 15:13:57.377917 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" event={"ID":"9f3206e6-eb45-4789-960d-dcf495b779df","Type":"ContainerStarted","Data":"4fe726b1ae1051acf6743865ab0ac2914031a9b68fcdcc73d1c2042620b9003f"} Jan 27 15:13:57 crc kubenswrapper[4697]: I0127 15:13:57.378362 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:57 crc kubenswrapper[4697]: I0127 15:13:57.383791 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" event={"ID":"c44113b6-1f57-4210-9564-fdaa63500f94","Type":"ContainerStarted","Data":"47ebce5d45d9d158139725001b45755bb873d44693f68476086b6e4051d57d8c"} Jan 27 15:13:57 crc kubenswrapper[4697]: I0127 15:13:57.383832 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" event={"ID":"c44113b6-1f57-4210-9564-fdaa63500f94","Type":"ContainerStarted","Data":"ca999e72f9f1ae998a5a070463beafdb1d608a6d9752ec71812e54abbba0d9f3"} Jan 27 15:13:57 crc kubenswrapper[4697]: I0127 15:13:57.383867 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:13:57 crc kubenswrapper[4697]: I0127 15:13:57.384263 4697 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:57 crc kubenswrapper[4697]: I0127 15:13:57.387995 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:13:57 crc kubenswrapper[4697]: I0127 15:13:57.397028 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" podStartSLOduration=3.397010825 podStartE2EDuration="3.397010825s" podCreationTimestamp="2026-01-27 15:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:13:57.394741399 +0000 UTC m=+333.567141190" watchObservedRunningTime="2026-01-27 15:13:57.397010825 +0000 UTC m=+333.569410606" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.190690 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" podStartSLOduration=19.190673056 podStartE2EDuration="19.190673056s" podCreationTimestamp="2026-01-27 15:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:13:57.429669448 +0000 UTC m=+333.602069249" watchObservedRunningTime="2026-01-27 15:14:13.190673056 +0000 UTC m=+349.363072837" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.195212 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4"] Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.195448 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" podUID="c44113b6-1f57-4210-9564-fdaa63500f94" containerName="controller-manager" 
containerID="cri-o://47ebce5d45d9d158139725001b45755bb873d44693f68476086b6e4051d57d8c" gracePeriod=30 Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.287276 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v"] Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.288145 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" podUID="9f3206e6-eb45-4789-960d-dcf495b779df" containerName="route-controller-manager" containerID="cri-o://e7cfd91f6848b10dfcf69e14058dda786aa56f27d3c24ba23e260c29d6132101" gracePeriod=30 Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.475899 4697 generic.go:334] "Generic (PLEG): container finished" podID="9f3206e6-eb45-4789-960d-dcf495b779df" containerID="e7cfd91f6848b10dfcf69e14058dda786aa56f27d3c24ba23e260c29d6132101" exitCode=0 Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.476043 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" event={"ID":"9f3206e6-eb45-4789-960d-dcf495b779df","Type":"ContainerDied","Data":"e7cfd91f6848b10dfcf69e14058dda786aa56f27d3c24ba23e260c29d6132101"} Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.477674 4697 generic.go:334] "Generic (PLEG): container finished" podID="c44113b6-1f57-4210-9564-fdaa63500f94" containerID="47ebce5d45d9d158139725001b45755bb873d44693f68476086b6e4051d57d8c" exitCode=0 Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.477700 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" event={"ID":"c44113b6-1f57-4210-9564-fdaa63500f94","Type":"ContainerDied","Data":"47ebce5d45d9d158139725001b45755bb873d44693f68476086b6e4051d57d8c"} Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.735824 4697 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.805101 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.929378 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcdnn\" (UniqueName: \"kubernetes.io/projected/c44113b6-1f57-4210-9564-fdaa63500f94-kube-api-access-vcdnn\") pod \"c44113b6-1f57-4210-9564-fdaa63500f94\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.929475 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-proxy-ca-bundles\") pod \"c44113b6-1f57-4210-9564-fdaa63500f94\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.929500 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-client-ca\") pod \"9f3206e6-eb45-4789-960d-dcf495b779df\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.929547 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-config\") pod \"9f3206e6-eb45-4789-960d-dcf495b779df\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.929596 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-client-ca\") pod \"c44113b6-1f57-4210-9564-fdaa63500f94\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.929621 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f3206e6-eb45-4789-960d-dcf495b779df-serving-cert\") pod \"9f3206e6-eb45-4789-960d-dcf495b779df\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.929637 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-config\") pod \"c44113b6-1f57-4210-9564-fdaa63500f94\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.929669 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c44113b6-1f57-4210-9564-fdaa63500f94-serving-cert\") pod \"c44113b6-1f57-4210-9564-fdaa63500f94\" (UID: \"c44113b6-1f57-4210-9564-fdaa63500f94\") " Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.929687 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47khj\" (UniqueName: \"kubernetes.io/projected/9f3206e6-eb45-4789-960d-dcf495b779df-kube-api-access-47khj\") pod \"9f3206e6-eb45-4789-960d-dcf495b779df\" (UID: \"9f3206e6-eb45-4789-960d-dcf495b779df\") " Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.930522 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-config" (OuterVolumeSpecName: "config") pod "9f3206e6-eb45-4789-960d-dcf495b779df" (UID: "9f3206e6-eb45-4789-960d-dcf495b779df"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.932265 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-client-ca" (OuterVolumeSpecName: "client-ca") pod "c44113b6-1f57-4210-9564-fdaa63500f94" (UID: "c44113b6-1f57-4210-9564-fdaa63500f94"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.935036 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c44113b6-1f57-4210-9564-fdaa63500f94" (UID: "c44113b6-1f57-4210-9564-fdaa63500f94"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.931360 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-client-ca" (OuterVolumeSpecName: "client-ca") pod "9f3206e6-eb45-4789-960d-dcf495b779df" (UID: "9f3206e6-eb45-4789-960d-dcf495b779df"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.935501 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-config" (OuterVolumeSpecName: "config") pod "c44113b6-1f57-4210-9564-fdaa63500f94" (UID: "c44113b6-1f57-4210-9564-fdaa63500f94"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.935708 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3206e6-eb45-4789-960d-dcf495b779df-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9f3206e6-eb45-4789-960d-dcf495b779df" (UID: "9f3206e6-eb45-4789-960d-dcf495b779df"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.935883 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44113b6-1f57-4210-9564-fdaa63500f94-kube-api-access-vcdnn" (OuterVolumeSpecName: "kube-api-access-vcdnn") pod "c44113b6-1f57-4210-9564-fdaa63500f94" (UID: "c44113b6-1f57-4210-9564-fdaa63500f94"). InnerVolumeSpecName "kube-api-access-vcdnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.941121 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3206e6-eb45-4789-960d-dcf495b779df-kube-api-access-47khj" (OuterVolumeSpecName: "kube-api-access-47khj") pod "9f3206e6-eb45-4789-960d-dcf495b779df" (UID: "9f3206e6-eb45-4789-960d-dcf495b779df"). InnerVolumeSpecName "kube-api-access-47khj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:14:13 crc kubenswrapper[4697]: I0127 15:14:13.944867 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44113b6-1f57-4210-9564-fdaa63500f94-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c44113b6-1f57-4210-9564-fdaa63500f94" (UID: "c44113b6-1f57-4210-9564-fdaa63500f94"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.030740 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcdnn\" (UniqueName: \"kubernetes.io/projected/c44113b6-1f57-4210-9564-fdaa63500f94-kube-api-access-vcdnn\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.030828 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.030844 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.030857 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f3206e6-eb45-4789-960d-dcf495b779df-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.030867 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.030877 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44113b6-1f57-4210-9564-fdaa63500f94-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.030888 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f3206e6-eb45-4789-960d-dcf495b779df-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.030897 4697 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c44113b6-1f57-4210-9564-fdaa63500f94-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.030907 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47khj\" (UniqueName: \"kubernetes.io/projected/9f3206e6-eb45-4789-960d-dcf495b779df-kube-api-access-47khj\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.485059 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.485102 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v" event={"ID":"9f3206e6-eb45-4789-960d-dcf495b779df","Type":"ContainerDied","Data":"4fe726b1ae1051acf6743865ab0ac2914031a9b68fcdcc73d1c2042620b9003f"} Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.485149 4697 scope.go:117] "RemoveContainer" containerID="e7cfd91f6848b10dfcf69e14058dda786aa56f27d3c24ba23e260c29d6132101" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.486716 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" event={"ID":"c44113b6-1f57-4210-9564-fdaa63500f94","Type":"ContainerDied","Data":"ca999e72f9f1ae998a5a070463beafdb1d608a6d9752ec71812e54abbba0d9f3"} Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.486804 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.501474 4697 scope.go:117] "RemoveContainer" containerID="47ebce5d45d9d158139725001b45755bb873d44693f68476086b6e4051d57d8c" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.518557 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v"] Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.527051 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6cc786fd-wk79v"] Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.529992 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4"] Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.532526 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c8d4bc847-l2cj4"] Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.575649 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3206e6-eb45-4789-960d-dcf495b779df" path="/var/lib/kubelet/pods/9f3206e6-eb45-4789-960d-dcf495b779df/volumes" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.576294 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44113b6-1f57-4210-9564-fdaa63500f94" path="/var/lib/kubelet/pods/c44113b6-1f57-4210-9564-fdaa63500f94/volumes" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.951607 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx"] Jan 27 15:14:14 crc kubenswrapper[4697]: E0127 15:14:14.954836 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44113b6-1f57-4210-9564-fdaa63500f94" containerName="controller-manager" Jan 27 15:14:14 crc 
kubenswrapper[4697]: I0127 15:14:14.954866 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44113b6-1f57-4210-9564-fdaa63500f94" containerName="controller-manager" Jan 27 15:14:14 crc kubenswrapper[4697]: E0127 15:14:14.954893 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3206e6-eb45-4789-960d-dcf495b779df" containerName="route-controller-manager" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.954906 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3206e6-eb45-4789-960d-dcf495b779df" containerName="route-controller-manager" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.955069 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3206e6-eb45-4789-960d-dcf495b779df" containerName="route-controller-manager" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.955093 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44113b6-1f57-4210-9564-fdaa63500f94" containerName="controller-manager" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.956177 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.958862 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.959045 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.959235 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.959664 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.959995 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.960235 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.960804 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-6b452"] Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.961749 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.973698 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx"] Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.974008 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.977680 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.977962 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.978111 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.978188 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.979272 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.985206 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-6b452"] Jan 27 15:14:14 crc kubenswrapper[4697]: I0127 15:14:14.985927 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.050000 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-proxy-ca-bundles\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.050071 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6lnp\" (UniqueName: \"kubernetes.io/projected/49aaa39e-82ad-44c7-b017-58b55e8f5f90-kube-api-access-n6lnp\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.050102 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-client-ca\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.050140 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-config\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.050165 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-client-ca\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " 
pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.050199 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce8da7f6-3656-484b-9784-e07362e25103-serving-cert\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.050226 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49aaa39e-82ad-44c7-b017-58b55e8f5f90-serving-cert\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.050257 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-config\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.050283 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2jn\" (UniqueName: \"kubernetes.io/projected/ce8da7f6-3656-484b-9784-e07362e25103-kube-api-access-qm2jn\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.151737 4697 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-proxy-ca-bundles\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.152094 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6lnp\" (UniqueName: \"kubernetes.io/projected/49aaa39e-82ad-44c7-b017-58b55e8f5f90-kube-api-access-n6lnp\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.152229 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-client-ca\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.152327 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-config\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.152422 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-client-ca\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 
27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.152517 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce8da7f6-3656-484b-9784-e07362e25103-serving-cert\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.152592 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49aaa39e-82ad-44c7-b017-58b55e8f5f90-serving-cert\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.152675 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-config\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.152769 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm2jn\" (UniqueName: \"kubernetes.io/projected/ce8da7f6-3656-484b-9784-e07362e25103-kube-api-access-qm2jn\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.153158 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-client-ca\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: 
\"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.153363 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-config\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.153978 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-proxy-ca-bundles\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.154440 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-client-ca\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.154567 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-config\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.159206 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49aaa39e-82ad-44c7-b017-58b55e8f5f90-serving-cert\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.169361 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce8da7f6-3656-484b-9784-e07362e25103-serving-cert\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.178996 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm2jn\" (UniqueName: \"kubernetes.io/projected/ce8da7f6-3656-484b-9784-e07362e25103-kube-api-access-qm2jn\") pod \"route-controller-manager-5474b5bbd7-sxnrx\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.180647 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6lnp\" (UniqueName: \"kubernetes.io/projected/49aaa39e-82ad-44c7-b017-58b55e8f5f90-kube-api-access-n6lnp\") pod \"controller-manager-7568f5d7c4-6b452\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.281257 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.285866 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.692562 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx"] Jan 27 15:14:15 crc kubenswrapper[4697]: I0127 15:14:15.740534 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-6b452"] Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.503244 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" event={"ID":"49aaa39e-82ad-44c7-b017-58b55e8f5f90","Type":"ContainerStarted","Data":"c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef"} Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.503655 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" event={"ID":"49aaa39e-82ad-44c7-b017-58b55e8f5f90","Type":"ContainerStarted","Data":"e05f1a115132875c253478af5f00973fb2c4a47d8fb5126a001eb8c2f39dff23"} Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.506355 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.511700 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" event={"ID":"ce8da7f6-3656-484b-9784-e07362e25103","Type":"ContainerStarted","Data":"e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b"} Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.511771 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" 
event={"ID":"ce8da7f6-3656-484b-9784-e07362e25103","Type":"ContainerStarted","Data":"af978ff23141fa46293619d77dda6c245477ed3ee8301a6b043ffe43454ab541"} Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.519261 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.522310 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.536707 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.537255 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" podStartSLOduration=3.53724211 podStartE2EDuration="3.53724211s" podCreationTimestamp="2026-01-27 15:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:14:16.525230828 +0000 UTC m=+352.697630619" watchObservedRunningTime="2026-01-27 15:14:16.53724211 +0000 UTC m=+352.709641901" Jan 27 15:14:16 crc kubenswrapper[4697]: I0127 15:14:16.633192 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" podStartSLOduration=3.6331527 podStartE2EDuration="3.6331527s" podCreationTimestamp="2026-01-27 15:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:14:16.5855898 +0000 UTC m=+352.757989591" watchObservedRunningTime="2026-01-27 15:14:16.6331527 +0000 UTC m=+352.805552481" Jan 27 15:14:25 
crc kubenswrapper[4697]: I0127 15:14:25.109203 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:14:25 crc kubenswrapper[4697]: I0127 15:14:25.109721 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.351517 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cq5lk"] Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.352997 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.365120 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cq5lk"] Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.553498 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a235329a-f0db-484d-bf3d-0d8c5b73010d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.553581 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrtfq\" (UniqueName: \"kubernetes.io/projected/a235329a-f0db-484d-bf3d-0d8c5b73010d-kube-api-access-jrtfq\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.553643 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a235329a-f0db-484d-bf3d-0d8c5b73010d-trusted-ca\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.553681 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a235329a-f0db-484d-bf3d-0d8c5b73010d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.553707 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a235329a-f0db-484d-bf3d-0d8c5b73010d-bound-sa-token\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.553735 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a235329a-f0db-484d-bf3d-0d8c5b73010d-registry-certificates\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.553761 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a235329a-f0db-484d-bf3d-0d8c5b73010d-registry-tls\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.553819 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.585440 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.654843 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a235329a-f0db-484d-bf3d-0d8c5b73010d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.655230 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a235329a-f0db-484d-bf3d-0d8c5b73010d-bound-sa-token\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.655343 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a235329a-f0db-484d-bf3d-0d8c5b73010d-registry-certificates\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.655447 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a235329a-f0db-484d-bf3d-0d8c5b73010d-registry-tls\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.655576 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a235329a-f0db-484d-bf3d-0d8c5b73010d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.655677 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrtfq\" (UniqueName: \"kubernetes.io/projected/a235329a-f0db-484d-bf3d-0d8c5b73010d-kube-api-access-jrtfq\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.655813 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a235329a-f0db-484d-bf3d-0d8c5b73010d-trusted-ca\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.657441 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a235329a-f0db-484d-bf3d-0d8c5b73010d-trusted-ca\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.659148 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a235329a-f0db-484d-bf3d-0d8c5b73010d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc 
kubenswrapper[4697]: I0127 15:14:46.660163 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a235329a-f0db-484d-bf3d-0d8c5b73010d-registry-certificates\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.676546 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a235329a-f0db-484d-bf3d-0d8c5b73010d-registry-tls\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.677087 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a235329a-f0db-484d-bf3d-0d8c5b73010d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.680193 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrtfq\" (UniqueName: \"kubernetes.io/projected/a235329a-f0db-484d-bf3d-0d8c5b73010d-kube-api-access-jrtfq\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.684588 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a235329a-f0db-484d-bf3d-0d8c5b73010d-bound-sa-token\") pod \"image-registry-66df7c8f76-cq5lk\" (UID: \"a235329a-f0db-484d-bf3d-0d8c5b73010d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:46 crc kubenswrapper[4697]: I0127 15:14:46.740284 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:47 crc kubenswrapper[4697]: I0127 15:14:47.154849 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cq5lk"] Jan 27 15:14:47 crc kubenswrapper[4697]: W0127 15:14:47.165959 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda235329a_f0db_484d_bf3d_0d8c5b73010d.slice/crio-781572b0e3873b916fe0f7f8278a6374a1386d9d070e2762baa91ce66f645c26 WatchSource:0}: Error finding container 781572b0e3873b916fe0f7f8278a6374a1386d9d070e2762baa91ce66f645c26: Status 404 returned error can't find the container with id 781572b0e3873b916fe0f7f8278a6374a1386d9d070e2762baa91ce66f645c26 Jan 27 15:14:47 crc kubenswrapper[4697]: I0127 15:14:47.677410 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" event={"ID":"a235329a-f0db-484d-bf3d-0d8c5b73010d","Type":"ContainerStarted","Data":"91627245b0be88d888f58e170fbe97034e928b999fae86a3697675c288f97efc"} Jan 27 15:14:47 crc kubenswrapper[4697]: I0127 15:14:47.677765 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" event={"ID":"a235329a-f0db-484d-bf3d-0d8c5b73010d","Type":"ContainerStarted","Data":"781572b0e3873b916fe0f7f8278a6374a1386d9d070e2762baa91ce66f645c26"} Jan 27 15:14:47 crc kubenswrapper[4697]: I0127 15:14:47.677808 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:14:47 crc kubenswrapper[4697]: I0127 15:14:47.697228 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" podStartSLOduration=1.697203584 podStartE2EDuration="1.697203584s" podCreationTimestamp="2026-01-27 15:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:14:47.692737029 +0000 UTC m=+383.865136810" watchObservedRunningTime="2026-01-27 15:14:47.697203584 +0000 UTC m=+383.869603405" Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.662369 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-59htg"] Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.663466 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-59htg" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerName="registry-server" containerID="cri-o://19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7" gracePeriod=30 Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.688646 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56255"] Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.688999 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-56255" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerName="registry-server" containerID="cri-o://748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e" gracePeriod=30 Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.714000 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45xm2"] Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.714270 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" 
containerName="marketplace-operator" containerID="cri-o://5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21" gracePeriod=30 Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.723019 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbv2z"] Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.723291 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cbv2z" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerName="registry-server" containerID="cri-o://6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556" gracePeriod=30 Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.727115 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wst7l"] Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.727335 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wst7l" podUID="20946332-e642-4802-b943-8c504ef8c3ec" containerName="registry-server" containerID="cri-o://ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add" gracePeriod=30 Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.748502 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hwq4c"] Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.749378 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.753916 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hwq4c"] Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.931395 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8ncq\" (UniqueName: \"kubernetes.io/projected/e4c801e2-39ef-4230-8bb0-fed36eccba1a-kube-api-access-q8ncq\") pod \"marketplace-operator-79b997595-hwq4c\" (UID: \"e4c801e2-39ef-4230-8bb0-fed36eccba1a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.931708 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4c801e2-39ef-4230-8bb0-fed36eccba1a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hwq4c\" (UID: \"e4c801e2-39ef-4230-8bb0-fed36eccba1a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:51 crc kubenswrapper[4697]: I0127 15:14:51.931740 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4c801e2-39ef-4230-8bb0-fed36eccba1a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hwq4c\" (UID: \"e4c801e2-39ef-4230-8bb0-fed36eccba1a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.032336 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4c801e2-39ef-4230-8bb0-fed36eccba1a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hwq4c\" (UID: 
\"e4c801e2-39ef-4230-8bb0-fed36eccba1a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.032390 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4c801e2-39ef-4230-8bb0-fed36eccba1a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hwq4c\" (UID: \"e4c801e2-39ef-4230-8bb0-fed36eccba1a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.032451 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8ncq\" (UniqueName: \"kubernetes.io/projected/e4c801e2-39ef-4230-8bb0-fed36eccba1a-kube-api-access-q8ncq\") pod \"marketplace-operator-79b997595-hwq4c\" (UID: \"e4c801e2-39ef-4230-8bb0-fed36eccba1a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.034877 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4c801e2-39ef-4230-8bb0-fed36eccba1a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hwq4c\" (UID: \"e4c801e2-39ef-4230-8bb0-fed36eccba1a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.038751 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4c801e2-39ef-4230-8bb0-fed36eccba1a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hwq4c\" (UID: \"e4c801e2-39ef-4230-8bb0-fed36eccba1a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.062056 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q8ncq\" (UniqueName: \"kubernetes.io/projected/e4c801e2-39ef-4230-8bb0-fed36eccba1a-kube-api-access-q8ncq\") pod \"marketplace-operator-79b997595-hwq4c\" (UID: \"e4c801e2-39ef-4230-8bb0-fed36eccba1a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.239443 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.339183 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.350629 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-utilities\") pod \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.350717 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-catalog-content\") pod \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.350749 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djj9k\" (UniqueName: \"kubernetes.io/projected/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-kube-api-access-djj9k\") pod \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\" (UID: \"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.351447 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-utilities" (OuterVolumeSpecName: 
"utilities") pod "bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" (UID: "bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.354448 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-kube-api-access-djj9k" (OuterVolumeSpecName: "kube-api-access-djj9k") pod "bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" (UID: "bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d"). InnerVolumeSpecName "kube-api-access-djj9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.380911 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.439934 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" (UID: "bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.446628 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.451726 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.451758 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djj9k\" (UniqueName: \"kubernetes.io/projected/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-kube-api-access-djj9k\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.451770 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.453387 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.455977 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-56255" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.552186 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-trusted-ca\") pod \"baa7401d-bcad-4175-af1b-46414c003f9e\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.552457 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hrcv\" (UniqueName: \"kubernetes.io/projected/d7864bf9-220d-402f-bb77-0240a422c2f8-kube-api-access-5hrcv\") pod \"d7864bf9-220d-402f-bb77-0240a422c2f8\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.552539 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-operator-metrics\") pod \"baa7401d-bcad-4175-af1b-46414c003f9e\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.552566 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-catalog-content\") pod \"d7864bf9-220d-402f-bb77-0240a422c2f8\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.552590 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxdxh\" (UniqueName: \"kubernetes.io/projected/baa7401d-bcad-4175-af1b-46414c003f9e-kube-api-access-kxdxh\") pod \"baa7401d-bcad-4175-af1b-46414c003f9e\" (UID: \"baa7401d-bcad-4175-af1b-46414c003f9e\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.552607 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-utilities\") pod \"d7864bf9-220d-402f-bb77-0240a422c2f8\" (UID: \"d7864bf9-220d-402f-bb77-0240a422c2f8\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.553159 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "baa7401d-bcad-4175-af1b-46414c003f9e" (UID: "baa7401d-bcad-4175-af1b-46414c003f9e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.553883 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-utilities" (OuterVolumeSpecName: "utilities") pod "d7864bf9-220d-402f-bb77-0240a422c2f8" (UID: "d7864bf9-220d-402f-bb77-0240a422c2f8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.554037 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.554053 4697 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.557300 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7864bf9-220d-402f-bb77-0240a422c2f8-kube-api-access-5hrcv" (OuterVolumeSpecName: "kube-api-access-5hrcv") pod "d7864bf9-220d-402f-bb77-0240a422c2f8" (UID: "d7864bf9-220d-402f-bb77-0240a422c2f8"). InnerVolumeSpecName "kube-api-access-5hrcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.563689 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa7401d-bcad-4175-af1b-46414c003f9e-kube-api-access-kxdxh" (OuterVolumeSpecName: "kube-api-access-kxdxh") pod "baa7401d-bcad-4175-af1b-46414c003f9e" (UID: "baa7401d-bcad-4175-af1b-46414c003f9e"). InnerVolumeSpecName "kube-api-access-kxdxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.563703 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "baa7401d-bcad-4175-af1b-46414c003f9e" (UID: "baa7401d-bcad-4175-af1b-46414c003f9e"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.612061 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7864bf9-220d-402f-bb77-0240a422c2f8" (UID: "d7864bf9-220d-402f-bb77-0240a422c2f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.656772 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvtd9\" (UniqueName: \"kubernetes.io/projected/316f7102-a9a6-40c4-b38b-ba9c7736526a-kube-api-access-gvtd9\") pod \"316f7102-a9a6-40c4-b38b-ba9c7736526a\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.656886 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-catalog-content\") pod \"316f7102-a9a6-40c4-b38b-ba9c7736526a\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.656934 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-utilities\") pod \"316f7102-a9a6-40c4-b38b-ba9c7736526a\" (UID: \"316f7102-a9a6-40c4-b38b-ba9c7736526a\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.657017 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-utilities\") pod \"20946332-e642-4802-b943-8c504ef8c3ec\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.657034 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5pdh\" (UniqueName: \"kubernetes.io/projected/20946332-e642-4802-b943-8c504ef8c3ec-kube-api-access-t5pdh\") pod \"20946332-e642-4802-b943-8c504ef8c3ec\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.657054 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-catalog-content\") pod \"20946332-e642-4802-b943-8c504ef8c3ec\" (UID: \"20946332-e642-4802-b943-8c504ef8c3ec\") " Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.657274 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hrcv\" (UniqueName: \"kubernetes.io/projected/d7864bf9-220d-402f-bb77-0240a422c2f8-kube-api-access-5hrcv\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.657305 4697 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baa7401d-bcad-4175-af1b-46414c003f9e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.657335 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864bf9-220d-402f-bb77-0240a422c2f8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.657389 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxdxh\" (UniqueName: \"kubernetes.io/projected/baa7401d-bcad-4175-af1b-46414c003f9e-kube-api-access-kxdxh\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.657982 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-utilities" 
(OuterVolumeSpecName: "utilities") pod "316f7102-a9a6-40c4-b38b-ba9c7736526a" (UID: "316f7102-a9a6-40c4-b38b-ba9c7736526a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.658217 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-utilities" (OuterVolumeSpecName: "utilities") pod "20946332-e642-4802-b943-8c504ef8c3ec" (UID: "20946332-e642-4802-b943-8c504ef8c3ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.662078 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316f7102-a9a6-40c4-b38b-ba9c7736526a-kube-api-access-gvtd9" (OuterVolumeSpecName: "kube-api-access-gvtd9") pod "316f7102-a9a6-40c4-b38b-ba9c7736526a" (UID: "316f7102-a9a6-40c4-b38b-ba9c7736526a"). InnerVolumeSpecName "kube-api-access-gvtd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.662850 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20946332-e642-4802-b943-8c504ef8c3ec-kube-api-access-t5pdh" (OuterVolumeSpecName: "kube-api-access-t5pdh") pod "20946332-e642-4802-b943-8c504ef8c3ec" (UID: "20946332-e642-4802-b943-8c504ef8c3ec"). InnerVolumeSpecName "kube-api-access-t5pdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.708479 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "316f7102-a9a6-40c4-b38b-ba9c7736526a" (UID: "316f7102-a9a6-40c4-b38b-ba9c7736526a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.724695 4697 generic.go:334] "Generic (PLEG): container finished" podID="baa7401d-bcad-4175-af1b-46414c003f9e" containerID="5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21" exitCode=0 Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.724823 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.725355 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" event={"ID":"baa7401d-bcad-4175-af1b-46414c003f9e","Type":"ContainerDied","Data":"5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21"} Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.725382 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45xm2" event={"ID":"baa7401d-bcad-4175-af1b-46414c003f9e","Type":"ContainerDied","Data":"815121f673a37302628072b384cf0f76b903303b6ed6a35e95c05b89d9509010"} Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.725401 4697 scope.go:117] "RemoveContainer" containerID="5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.727225 4697 generic.go:334] "Generic (PLEG): container finished" podID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerID="748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e" exitCode=0 Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.727379 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-56255" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.728181 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56255" event={"ID":"316f7102-a9a6-40c4-b38b-ba9c7736526a","Type":"ContainerDied","Data":"748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e"} Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.728238 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56255" event={"ID":"316f7102-a9a6-40c4-b38b-ba9c7736526a","Type":"ContainerDied","Data":"2af11725f4c88077db07e7383c9c73df1ff77fc5eedeb887c192cd8cb63fc249"} Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.737665 4697 generic.go:334] "Generic (PLEG): container finished" podID="20946332-e642-4802-b943-8c504ef8c3ec" containerID="ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add" exitCode=0 Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.737769 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wst7l" event={"ID":"20946332-e642-4802-b943-8c504ef8c3ec","Type":"ContainerDied","Data":"ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add"} Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.737848 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wst7l" event={"ID":"20946332-e642-4802-b943-8c504ef8c3ec","Type":"ContainerDied","Data":"d2c42e721683e1257b972b3e1f0950b092bca4e3a92b391de6e02aba5604a70e"} Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.737928 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wst7l" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.752078 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45xm2"] Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.753401 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45xm2"] Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.753765 4697 scope.go:117] "RemoveContainer" containerID="31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.753927 4697 generic.go:334] "Generic (PLEG): container finished" podID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerID="19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7" exitCode=0 Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.754047 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-59htg" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.754145 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59htg" event={"ID":"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d","Type":"ContainerDied","Data":"19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7"} Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.754230 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59htg" event={"ID":"bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d","Type":"ContainerDied","Data":"e42ec4b279cb18c82c3c8bc9e910789f2171d0df56384daf8541822228ac1ce6"} Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.758439 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc 
kubenswrapper[4697]: I0127 15:14:52.758467 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.758479 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5pdh\" (UniqueName: \"kubernetes.io/projected/20946332-e642-4802-b943-8c504ef8c3ec-kube-api-access-t5pdh\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.758492 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvtd9\" (UniqueName: \"kubernetes.io/projected/316f7102-a9a6-40c4-b38b-ba9c7736526a-kube-api-access-gvtd9\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.758503 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f7102-a9a6-40c4-b38b-ba9c7736526a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.759823 4697 generic.go:334] "Generic (PLEG): container finished" podID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerID="6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556" exitCode=0 Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.759858 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv2z" event={"ID":"d7864bf9-220d-402f-bb77-0240a422c2f8","Type":"ContainerDied","Data":"6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556"} Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.759883 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv2z" event={"ID":"d7864bf9-220d-402f-bb77-0240a422c2f8","Type":"ContainerDied","Data":"a136edd5440e98ce1be0cc6e4f49af93fe0a252c2a0fbed45df56ed8399f459e"} Jan 27 15:14:52 crc 
kubenswrapper[4697]: I0127 15:14:52.759947 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbv2z" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.781064 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56255"] Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.787124 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-56255"] Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.797276 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-59htg"] Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.797495 4697 scope.go:117] "RemoveContainer" containerID="5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21" Jan 27 15:14:52 crc kubenswrapper[4697]: E0127 15:14:52.798045 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21\": container with ID starting with 5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21 not found: ID does not exist" containerID="5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.798299 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21"} err="failed to get container status \"5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21\": rpc error: code = NotFound desc = could not find container \"5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21\": container with ID starting with 5ea45aeff8eba19e51f7f03a1286de49e37eeeceb86a6a07a5cb5b98203b8d21 not found: ID does not exist" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.798447 
4697 scope.go:117] "RemoveContainer" containerID="31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83" Jan 27 15:14:52 crc kubenswrapper[4697]: E0127 15:14:52.799440 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83\": container with ID starting with 31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83 not found: ID does not exist" containerID="31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.799472 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83"} err="failed to get container status \"31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83\": rpc error: code = NotFound desc = could not find container \"31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83\": container with ID starting with 31c0ab84a5b388f859f6c45a8b145c924146deefae5e970ec7cb5133aa40ab83 not found: ID does not exist" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.799492 4697 scope.go:117] "RemoveContainer" containerID="748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.801184 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-59htg"] Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.819883 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20946332-e642-4802-b943-8c504ef8c3ec" (UID: "20946332-e642-4802-b943-8c504ef8c3ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.850055 4697 scope.go:117] "RemoveContainer" containerID="58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.853991 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbv2z"] Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.860036 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20946332-e642-4802-b943-8c504ef8c3ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.860094 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbv2z"] Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.866916 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hwq4c"] Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.877240 4697 scope.go:117] "RemoveContainer" containerID="905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.903833 4697 scope.go:117] "RemoveContainer" containerID="748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e" Jan 27 15:14:52 crc kubenswrapper[4697]: E0127 15:14:52.904404 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e\": container with ID starting with 748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e not found: ID does not exist" containerID="748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.904497 4697 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e"} err="failed to get container status \"748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e\": rpc error: code = NotFound desc = could not find container \"748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e\": container with ID starting with 748b048149034c965bdff46bd997b49b83543fff9b59ab2e8abeb9d6c62c789e not found: ID does not exist" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.904527 4697 scope.go:117] "RemoveContainer" containerID="58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa" Jan 27 15:14:52 crc kubenswrapper[4697]: E0127 15:14:52.904922 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa\": container with ID starting with 58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa not found: ID does not exist" containerID="58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.904952 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa"} err="failed to get container status \"58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa\": rpc error: code = NotFound desc = could not find container \"58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa\": container with ID starting with 58359a344dcf3fc526e390ead399464d02504a33c6b90773cf210ccdec2e72fa not found: ID does not exist" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.904974 4697 scope.go:117] "RemoveContainer" containerID="905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3" Jan 27 15:14:52 crc kubenswrapper[4697]: E0127 15:14:52.905346 4697 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3\": container with ID starting with 905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3 not found: ID does not exist" containerID="905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.905474 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3"} err="failed to get container status \"905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3\": rpc error: code = NotFound desc = could not find container \"905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3\": container with ID starting with 905d5ffe2f26fc6a47a46026dbcb1aecd4b8e1e24dcbd5491e7e593b72bd2fc3 not found: ID does not exist" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.905585 4697 scope.go:117] "RemoveContainer" containerID="ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.928982 4697 scope.go:117] "RemoveContainer" containerID="2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.947152 4697 scope.go:117] "RemoveContainer" containerID="ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.962894 4697 scope.go:117] "RemoveContainer" containerID="ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add" Jan 27 15:14:52 crc kubenswrapper[4697]: E0127 15:14:52.964604 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add\": container with ID starting with 
ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add not found: ID does not exist" containerID="ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.964640 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add"} err="failed to get container status \"ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add\": rpc error: code = NotFound desc = could not find container \"ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add\": container with ID starting with ab904360537628b6277b9dbe62f06c910b7868c23a57255c36036686a33b3add not found: ID does not exist" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.964662 4697 scope.go:117] "RemoveContainer" containerID="2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307" Jan 27 15:14:52 crc kubenswrapper[4697]: E0127 15:14:52.965107 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307\": container with ID starting with 2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307 not found: ID does not exist" containerID="2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.965128 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307"} err="failed to get container status \"2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307\": rpc error: code = NotFound desc = could not find container \"2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307\": container with ID starting with 2d4b85c6a3fc9dad6ab73cec5f33afa3ba363d59ef701e5e9fb29d624e226307 not found: ID does not 
exist" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.965142 4697 scope.go:117] "RemoveContainer" containerID="ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359" Jan 27 15:14:52 crc kubenswrapper[4697]: E0127 15:14:52.965480 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359\": container with ID starting with ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359 not found: ID does not exist" containerID="ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.965500 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359"} err="failed to get container status \"ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359\": rpc error: code = NotFound desc = could not find container \"ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359\": container with ID starting with ff8934e32768840d7a6891eff01e1594c2b342778bd9cb5d6695426d3632a359 not found: ID does not exist" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.965526 4697 scope.go:117] "RemoveContainer" containerID="19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7" Jan 27 15:14:52 crc kubenswrapper[4697]: I0127 15:14:52.986959 4697 scope.go:117] "RemoveContainer" containerID="633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.000919 4697 scope.go:117] "RemoveContainer" containerID="8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.017267 4697 scope.go:117] "RemoveContainer" containerID="19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7" Jan 27 15:14:53 crc 
kubenswrapper[4697]: E0127 15:14:53.018250 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7\": container with ID starting with 19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7 not found: ID does not exist" containerID="19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.018292 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7"} err="failed to get container status \"19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7\": rpc error: code = NotFound desc = could not find container \"19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7\": container with ID starting with 19278f20367c3d68412249f688ef50f3eeaa19d235ac0a41cc7b2050ebcd7af7 not found: ID does not exist" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.018319 4697 scope.go:117] "RemoveContainer" containerID="633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.019014 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70\": container with ID starting with 633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70 not found: ID does not exist" containerID="633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.019066 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70"} err="failed to get container status 
\"633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70\": rpc error: code = NotFound desc = could not find container \"633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70\": container with ID starting with 633fc583b6005a43efa801b10ac82a0323a1375b30c13145cc7fda0a1df01c70 not found: ID does not exist" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.019095 4697 scope.go:117] "RemoveContainer" containerID="8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.019593 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba\": container with ID starting with 8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba not found: ID does not exist" containerID="8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.019624 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba"} err="failed to get container status \"8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba\": rpc error: code = NotFound desc = could not find container \"8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba\": container with ID starting with 8cddfd6a2cc55c3c78aae23b52977997bc1b96a4dd948af487e09beabba7afba not found: ID does not exist" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.019666 4697 scope.go:117] "RemoveContainer" containerID="6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.041344 4697 scope.go:117] "RemoveContainer" containerID="2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.053739 4697 
scope.go:117] "RemoveContainer" containerID="e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.066701 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wst7l"] Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.076294 4697 scope.go:117] "RemoveContainer" containerID="6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.076592 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wst7l"] Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.076701 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556\": container with ID starting with 6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556 not found: ID does not exist" containerID="6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.076735 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556"} err="failed to get container status \"6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556\": rpc error: code = NotFound desc = could not find container \"6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556\": container with ID starting with 6ad129e2b3ab491f043bb2b6f6f846788fa6f0f4397c088ff3445dbc07c48556 not found: ID does not exist" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.076765 4697 scope.go:117] "RemoveContainer" containerID="2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.078163 4697 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2\": container with ID starting with 2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2 not found: ID does not exist" containerID="2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.078208 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2"} err="failed to get container status \"2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2\": rpc error: code = NotFound desc = could not find container \"2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2\": container with ID starting with 2b1863d9ef7301bac8f70977042df56ff9f3ad13fa843ca1c9d1e5421fac1da2 not found: ID does not exist" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.078231 4697 scope.go:117] "RemoveContainer" containerID="e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.079001 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95\": container with ID starting with e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95 not found: ID does not exist" containerID="e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.079021 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95"} err="failed to get container status \"e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95\": rpc error: code = NotFound desc = could not find container 
\"e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95\": container with ID starting with e70db5c6ed677a639d2d5c04f3cd9bd3c0931ee09b13f2328ad630080eaaee95 not found: ID does not exist" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.214914 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-6b452"] Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.215330 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" podUID="49aaa39e-82ad-44c7-b017-58b55e8f5f90" containerName="controller-manager" containerID="cri-o://c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef" gracePeriod=30 Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.266226 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx"] Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.266443 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" podUID="ce8da7f6-3656-484b-9784-e07362e25103" containerName="route-controller-manager" containerID="cri-o://e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b" gracePeriod=30 Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.717295 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.758044 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.773971 4697 generic.go:334] "Generic (PLEG): container finished" podID="ce8da7f6-3656-484b-9784-e07362e25103" containerID="e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b" exitCode=0 Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.774188 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.774498 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" event={"ID":"ce8da7f6-3656-484b-9784-e07362e25103","Type":"ContainerDied","Data":"e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b"} Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.774537 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx" event={"ID":"ce8da7f6-3656-484b-9784-e07362e25103","Type":"ContainerDied","Data":"af978ff23141fa46293619d77dda6c245477ed3ee8301a6b043ffe43454ab541"} Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.774579 4697 scope.go:117] "RemoveContainer" containerID="e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.793373 4697 generic.go:334] "Generic (PLEG): container finished" podID="49aaa39e-82ad-44c7-b017-58b55e8f5f90" containerID="c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef" exitCode=0 Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.793436 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" 
event={"ID":"49aaa39e-82ad-44c7-b017-58b55e8f5f90","Type":"ContainerDied","Data":"c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef"} Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.793457 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" event={"ID":"49aaa39e-82ad-44c7-b017-58b55e8f5f90","Type":"ContainerDied","Data":"e05f1a115132875c253478af5f00973fb2c4a47d8fb5126a001eb8c2f39dff23"} Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.793506 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-6b452" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.799037 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" event={"ID":"e4c801e2-39ef-4230-8bb0-fed36eccba1a","Type":"ContainerStarted","Data":"fd77997cb847ad9f8cae5b51d135cb92de0efc509c42ee6dcd35fccfd3d50eed"} Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.799078 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" event={"ID":"e4c801e2-39ef-4230-8bb0-fed36eccba1a","Type":"ContainerStarted","Data":"226e8c05e6f26c036fad683a5b9e385ec4259a2c84ff9629f12655632e7c9573"} Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.799431 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.800447 4697 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hwq4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.800490 4697 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" podUID="e4c801e2-39ef-4230-8bb0-fed36eccba1a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.819758 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" podStartSLOduration=2.819737933 podStartE2EDuration="2.819737933s" podCreationTimestamp="2026-01-27 15:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:14:53.818097204 +0000 UTC m=+389.990496985" watchObservedRunningTime="2026-01-27 15:14:53.819737933 +0000 UTC m=+389.992137714" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.832271 4697 scope.go:117] "RemoveContainer" containerID="e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.836100 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b\": container with ID starting with e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b not found: ID does not exist" containerID="e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.836149 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b"} err="failed to get container status \"e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b\": rpc error: code = NotFound desc = could not find container 
\"e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b\": container with ID starting with e55bd5a497bd8ea9c5065c4db7c377140def4323dec57876384841553b5f4c5b not found: ID does not exist" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.836188 4697 scope.go:117] "RemoveContainer" containerID="c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.855068 4697 scope.go:117] "RemoveContainer" containerID="c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.855605 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef\": container with ID starting with c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef not found: ID does not exist" containerID="c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.855644 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef"} err="failed to get container status \"c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef\": rpc error: code = NotFound desc = could not find container \"c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef\": container with ID starting with c04379beef8c58a2c568b5e7986b3103502d302dffc99ab2ed7319b7257b3bef not found: ID does not exist" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.871278 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-config\") pod \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 
15:14:53.871329 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm2jn\" (UniqueName: \"kubernetes.io/projected/ce8da7f6-3656-484b-9784-e07362e25103-kube-api-access-qm2jn\") pod \"ce8da7f6-3656-484b-9784-e07362e25103\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.871357 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-proxy-ca-bundles\") pod \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.871372 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-client-ca\") pod \"ce8da7f6-3656-484b-9784-e07362e25103\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.871423 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6lnp\" (UniqueName: \"kubernetes.io/projected/49aaa39e-82ad-44c7-b017-58b55e8f5f90-kube-api-access-n6lnp\") pod \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.871440 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49aaa39e-82ad-44c7-b017-58b55e8f5f90-serving-cert\") pod \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.871455 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-config\") pod 
\"ce8da7f6-3656-484b-9784-e07362e25103\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.871467 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce8da7f6-3656-484b-9784-e07362e25103-serving-cert\") pod \"ce8da7f6-3656-484b-9784-e07362e25103\" (UID: \"ce8da7f6-3656-484b-9784-e07362e25103\") " Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.871505 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-client-ca\") pod \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\" (UID: \"49aaa39e-82ad-44c7-b017-58b55e8f5f90\") " Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.872496 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-config" (OuterVolumeSpecName: "config") pod "49aaa39e-82ad-44c7-b017-58b55e8f5f90" (UID: "49aaa39e-82ad-44c7-b017-58b55e8f5f90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.873162 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce8da7f6-3656-484b-9784-e07362e25103" (UID: "ce8da7f6-3656-484b-9784-e07362e25103"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.873245 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-config" (OuterVolumeSpecName: "config") pod "ce8da7f6-3656-484b-9784-e07362e25103" (UID: "ce8da7f6-3656-484b-9784-e07362e25103"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.873759 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-client-ca" (OuterVolumeSpecName: "client-ca") pod "49aaa39e-82ad-44c7-b017-58b55e8f5f90" (UID: "49aaa39e-82ad-44c7-b017-58b55e8f5f90"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.874135 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "49aaa39e-82ad-44c7-b017-58b55e8f5f90" (UID: "49aaa39e-82ad-44c7-b017-58b55e8f5f90"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.878482 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8da7f6-3656-484b-9784-e07362e25103-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce8da7f6-3656-484b-9784-e07362e25103" (UID: "ce8da7f6-3656-484b-9784-e07362e25103"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.878697 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49aaa39e-82ad-44c7-b017-58b55e8f5f90-kube-api-access-n6lnp" (OuterVolumeSpecName: "kube-api-access-n6lnp") pod "49aaa39e-82ad-44c7-b017-58b55e8f5f90" (UID: "49aaa39e-82ad-44c7-b017-58b55e8f5f90"). InnerVolumeSpecName "kube-api-access-n6lnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.879970 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8da7f6-3656-484b-9784-e07362e25103-kube-api-access-qm2jn" (OuterVolumeSpecName: "kube-api-access-qm2jn") pod "ce8da7f6-3656-484b-9784-e07362e25103" (UID: "ce8da7f6-3656-484b-9784-e07362e25103"). InnerVolumeSpecName "kube-api-access-qm2jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883298 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q9lwr"] Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883478 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20946332-e642-4802-b943-8c504ef8c3ec" containerName="extract-utilities" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883493 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="20946332-e642-4802-b943-8c504ef8c3ec" containerName="extract-utilities" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883504 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerName="extract-utilities" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883510 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerName="extract-utilities" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883519 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883525 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883533 4697 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="20946332-e642-4802-b943-8c504ef8c3ec" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883541 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="20946332-e642-4802-b943-8c504ef8c3ec" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883549 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883554 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883563 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" containerName="marketplace-operator" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883569 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" containerName="marketplace-operator" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883579 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8da7f6-3656-484b-9784-e07362e25103" containerName="route-controller-manager" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883585 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8da7f6-3656-484b-9784-e07362e25103" containerName="route-controller-manager" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883592 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerName="extract-content" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883598 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerName="extract-content" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883607 4697 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883614 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883621 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20946332-e642-4802-b943-8c504ef8c3ec" containerName="extract-content" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883627 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="20946332-e642-4802-b943-8c504ef8c3ec" containerName="extract-content" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883636 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerName="extract-content" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883642 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerName="extract-content" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883650 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49aaa39e-82ad-44c7-b017-58b55e8f5f90" containerName="controller-manager" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883657 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="49aaa39e-82ad-44c7-b017-58b55e8f5f90" containerName="controller-manager" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883665 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerName="extract-utilities" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883670 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerName="extract-utilities" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883677 4697 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerName="extract-content" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883682 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerName="extract-content" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.883691 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerName="extract-utilities" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883696 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerName="extract-utilities" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883768 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883859 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="20946332-e642-4802-b943-8c504ef8c3ec" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883868 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="49aaa39e-82ad-44c7-b017-58b55e8f5f90" containerName="controller-manager" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883878 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8da7f6-3656-484b-9784-e07362e25103" containerName="route-controller-manager" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883885 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883892 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" containerName="registry-server" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 
15:14:53.883898 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" containerName="marketplace-operator" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.883906 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" containerName="marketplace-operator" Jan 27 15:14:53 crc kubenswrapper[4697]: E0127 15:14:53.884011 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" containerName="marketplace-operator" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.884019 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" containerName="marketplace-operator" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.884566 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.886916 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49aaa39e-82ad-44c7-b017-58b55e8f5f90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "49aaa39e-82ad-44c7-b017-58b55e8f5f90" (UID: "49aaa39e-82ad-44c7-b017-58b55e8f5f90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.887668 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.904611 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9lwr"] Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.974506 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.974556 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.974566 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm2jn\" (UniqueName: \"kubernetes.io/projected/ce8da7f6-3656-484b-9784-e07362e25103-kube-api-access-qm2jn\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.974579 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49aaa39e-82ad-44c7-b017-58b55e8f5f90-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.974587 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.974596 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6lnp\" (UniqueName: \"kubernetes.io/projected/49aaa39e-82ad-44c7-b017-58b55e8f5f90-kube-api-access-n6lnp\") on node 
\"crc\" DevicePath \"\"" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.974620 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49aaa39e-82ad-44c7-b017-58b55e8f5f90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.974628 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce8da7f6-3656-484b-9784-e07362e25103-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:53 crc kubenswrapper[4697]: I0127 15:14:53.974637 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce8da7f6-3656-484b-9784-e07362e25103-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.075435 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcl7r\" (UniqueName: \"kubernetes.io/projected/24c17e72-9143-4da4-8b8f-0a777f568dfc-kube-api-access-qcl7r\") pod \"redhat-marketplace-q9lwr\" (UID: \"24c17e72-9143-4da4-8b8f-0a777f568dfc\") " pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.075476 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c17e72-9143-4da4-8b8f-0a777f568dfc-catalog-content\") pod \"redhat-marketplace-q9lwr\" (UID: \"24c17e72-9143-4da4-8b8f-0a777f568dfc\") " pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.075672 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c17e72-9143-4da4-8b8f-0a777f568dfc-utilities\") pod \"redhat-marketplace-q9lwr\" (UID: \"24c17e72-9143-4da4-8b8f-0a777f568dfc\") " 
pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.083362 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cbj7r"] Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.084305 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.086869 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.097012 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbj7r"] Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.146442 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-6b452"] Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.157558 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-6b452"] Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.161805 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx"] Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.165402 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-sxnrx"] Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.176926 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcl7r\" (UniqueName: \"kubernetes.io/projected/24c17e72-9143-4da4-8b8f-0a777f568dfc-kube-api-access-qcl7r\") pod \"redhat-marketplace-q9lwr\" (UID: \"24c17e72-9143-4da4-8b8f-0a777f568dfc\") " pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc 
kubenswrapper[4697]: I0127 15:14:54.176964 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c17e72-9143-4da4-8b8f-0a777f568dfc-catalog-content\") pod \"redhat-marketplace-q9lwr\" (UID: \"24c17e72-9143-4da4-8b8f-0a777f568dfc\") " pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.177021 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c17e72-9143-4da4-8b8f-0a777f568dfc-utilities\") pod \"redhat-marketplace-q9lwr\" (UID: \"24c17e72-9143-4da4-8b8f-0a777f568dfc\") " pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.177379 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c17e72-9143-4da4-8b8f-0a777f568dfc-utilities\") pod \"redhat-marketplace-q9lwr\" (UID: \"24c17e72-9143-4da4-8b8f-0a777f568dfc\") " pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.177478 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c17e72-9143-4da4-8b8f-0a777f568dfc-catalog-content\") pod \"redhat-marketplace-q9lwr\" (UID: \"24c17e72-9143-4da4-8b8f-0a777f568dfc\") " pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.193375 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcl7r\" (UniqueName: \"kubernetes.io/projected/24c17e72-9143-4da4-8b8f-0a777f568dfc-kube-api-access-qcl7r\") pod \"redhat-marketplace-q9lwr\" (UID: \"24c17e72-9143-4da4-8b8f-0a777f568dfc\") " pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.225529 4697 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.277860 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52ecc276-9ad2-4527-9e59-a4e19c63d851-utilities\") pod \"redhat-operators-cbj7r\" (UID: \"52ecc276-9ad2-4527-9e59-a4e19c63d851\") " pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.277901 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52ecc276-9ad2-4527-9e59-a4e19c63d851-catalog-content\") pod \"redhat-operators-cbj7r\" (UID: \"52ecc276-9ad2-4527-9e59-a4e19c63d851\") " pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.277927 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfg77\" (UniqueName: \"kubernetes.io/projected/52ecc276-9ad2-4527-9e59-a4e19c63d851-kube-api-access-tfg77\") pod \"redhat-operators-cbj7r\" (UID: \"52ecc276-9ad2-4527-9e59-a4e19c63d851\") " pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.378672 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52ecc276-9ad2-4527-9e59-a4e19c63d851-utilities\") pod \"redhat-operators-cbj7r\" (UID: \"52ecc276-9ad2-4527-9e59-a4e19c63d851\") " pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.378706 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52ecc276-9ad2-4527-9e59-a4e19c63d851-catalog-content\") pod 
\"redhat-operators-cbj7r\" (UID: \"52ecc276-9ad2-4527-9e59-a4e19c63d851\") " pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.378724 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfg77\" (UniqueName: \"kubernetes.io/projected/52ecc276-9ad2-4527-9e59-a4e19c63d851-kube-api-access-tfg77\") pod \"redhat-operators-cbj7r\" (UID: \"52ecc276-9ad2-4527-9e59-a4e19c63d851\") " pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.379722 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52ecc276-9ad2-4527-9e59-a4e19c63d851-utilities\") pod \"redhat-operators-cbj7r\" (UID: \"52ecc276-9ad2-4527-9e59-a4e19c63d851\") " pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.379942 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52ecc276-9ad2-4527-9e59-a4e19c63d851-catalog-content\") pod \"redhat-operators-cbj7r\" (UID: \"52ecc276-9ad2-4527-9e59-a4e19c63d851\") " pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.395358 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfg77\" (UniqueName: \"kubernetes.io/projected/52ecc276-9ad2-4527-9e59-a4e19c63d851-kube-api-access-tfg77\") pod \"redhat-operators-cbj7r\" (UID: \"52ecc276-9ad2-4527-9e59-a4e19c63d851\") " pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.405685 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.577922 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20946332-e642-4802-b943-8c504ef8c3ec" path="/var/lib/kubelet/pods/20946332-e642-4802-b943-8c504ef8c3ec/volumes" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.579300 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316f7102-a9a6-40c4-b38b-ba9c7736526a" path="/var/lib/kubelet/pods/316f7102-a9a6-40c4-b38b-ba9c7736526a/volumes" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.580305 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49aaa39e-82ad-44c7-b017-58b55e8f5f90" path="/var/lib/kubelet/pods/49aaa39e-82ad-44c7-b017-58b55e8f5f90/volumes" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.581771 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa7401d-bcad-4175-af1b-46414c003f9e" path="/var/lib/kubelet/pods/baa7401d-bcad-4175-af1b-46414c003f9e/volumes" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.582517 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d" path="/var/lib/kubelet/pods/bf0813c1-e1e9-42a4-8cf9-8fd7fba35e3d/volumes" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.583999 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce8da7f6-3656-484b-9784-e07362e25103" path="/var/lib/kubelet/pods/ce8da7f6-3656-484b-9784-e07362e25103/volumes" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.584666 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7864bf9-220d-402f-bb77-0240a422c2f8" path="/var/lib/kubelet/pods/d7864bf9-220d-402f-bb77-0240a422c2f8/volumes" Jan 27 15:14:54 crc kubenswrapper[4697]: W0127 15:14:54.663710 4697 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24c17e72_9143_4da4_8b8f_0a777f568dfc.slice/crio-c2b7cbee3f8767296b10e1d6e70b4c533ce0dc1bad03ea7d4efb2c8b6487ffaf WatchSource:0}: Error finding container c2b7cbee3f8767296b10e1d6e70b4c533ce0dc1bad03ea7d4efb2c8b6487ffaf: Status 404 returned error can't find the container with id c2b7cbee3f8767296b10e1d6e70b4c533ce0dc1bad03ea7d4efb2c8b6487ffaf Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.664760 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9lwr"] Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.806698 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9lwr" event={"ID":"24c17e72-9143-4da4-8b8f-0a777f568dfc","Type":"ContainerStarted","Data":"07f58960a19683c1040b73c8976af023a0a8b40600f503a2cf86ad61626526ea"} Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.807003 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9lwr" event={"ID":"24c17e72-9143-4da4-8b8f-0a777f568dfc","Type":"ContainerStarted","Data":"c2b7cbee3f8767296b10e1d6e70b4c533ce0dc1bad03ea7d4efb2c8b6487ffaf"} Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.809816 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.825213 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbj7r"] Jan 27 15:14:54 crc kubenswrapper[4697]: W0127 15:14:54.861372 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52ecc276_9ad2_4527_9e59_a4e19c63d851.slice/crio-c32a0b675540f3e27757c0cd037d2ebf9c71fd3a1dfce046fd211b1d2d022e1a WatchSource:0}: Error finding container 
c32a0b675540f3e27757c0cd037d2ebf9c71fd3a1dfce046fd211b1d2d022e1a: Status 404 returned error can't find the container with id c32a0b675540f3e27757c0cd037d2ebf9c71fd3a1dfce046fd211b1d2d022e1a Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.975828 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr"] Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.976412 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.980838 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.980908 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.981018 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.981066 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.981345 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.981489 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.990011 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4"] Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 
15:14:54.990908 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.994928 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.995137 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.995242 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.995344 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:14:54 crc kubenswrapper[4697]: I0127 15:14:54.995727 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.003251 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.009657 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.015023 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4"] Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.026175 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr"] Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.090248 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-client-ca\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.090629 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxd4g\" (UniqueName: \"kubernetes.io/projected/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-kube-api-access-xxd4g\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.090823 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttjf\" (UniqueName: \"kubernetes.io/projected/31443f3c-4452-47aa-a4bd-2ecd733c442d-kube-api-access-dttjf\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.090990 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-serving-cert\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.091169 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-proxy-ca-bundles\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: 
\"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.091369 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31443f3c-4452-47aa-a4bd-2ecd733c442d-client-ca\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.091514 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31443f3c-4452-47aa-a4bd-2ecd733c442d-serving-cert\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.091712 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-config\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.091895 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31443f3c-4452-47aa-a4bd-2ecd733c442d-config\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.108758 4697 patch_prober.go:28] interesting 
pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.109048 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.193290 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31443f3c-4452-47aa-a4bd-2ecd733c442d-client-ca\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.193342 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31443f3c-4452-47aa-a4bd-2ecd733c442d-serving-cert\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.193384 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-config\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.193420 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31443f3c-4452-47aa-a4bd-2ecd733c442d-config\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.193456 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-client-ca\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.193486 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxd4g\" (UniqueName: \"kubernetes.io/projected/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-kube-api-access-xxd4g\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.193511 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dttjf\" (UniqueName: \"kubernetes.io/projected/31443f3c-4452-47aa-a4bd-2ecd733c442d-kube-api-access-dttjf\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.193535 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-serving-cert\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " 
pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.193561 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-proxy-ca-bundles\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.194500 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31443f3c-4452-47aa-a4bd-2ecd733c442d-client-ca\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.194711 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-proxy-ca-bundles\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.195420 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31443f3c-4452-47aa-a4bd-2ecd733c442d-config\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.195651 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-config\") pod 
\"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.196256 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-client-ca\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.198644 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31443f3c-4452-47aa-a4bd-2ecd733c442d-serving-cert\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.198848 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-serving-cert\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.217634 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxd4g\" (UniqueName: \"kubernetes.io/projected/7a6a86af-c5ef-4c92-b496-e9c7974a2e46-kube-api-access-xxd4g\") pod \"controller-manager-7c8d4bc847-8rsl4\" (UID: \"7a6a86af-c5ef-4c92-b496-e9c7974a2e46\") " pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.221414 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dttjf\" (UniqueName: 
\"kubernetes.io/projected/31443f3c-4452-47aa-a4bd-2ecd733c442d-kube-api-access-dttjf\") pod \"route-controller-manager-c6cc786fd-2v4gr\" (UID: \"31443f3c-4452-47aa-a4bd-2ecd733c442d\") " pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.358914 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.361337 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.670486 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr"] Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.813918 4697 generic.go:334] "Generic (PLEG): container finished" podID="52ecc276-9ad2-4527-9e59-a4e19c63d851" containerID="28737e1c852e6767ab3ee159f19ffc9a91176d7603096a9900a49442cb42da66" exitCode=0 Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.813977 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbj7r" event={"ID":"52ecc276-9ad2-4527-9e59-a4e19c63d851","Type":"ContainerDied","Data":"28737e1c852e6767ab3ee159f19ffc9a91176d7603096a9900a49442cb42da66"} Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.814215 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbj7r" event={"ID":"52ecc276-9ad2-4527-9e59-a4e19c63d851","Type":"ContainerStarted","Data":"c32a0b675540f3e27757c0cd037d2ebf9c71fd3a1dfce046fd211b1d2d022e1a"} Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.815407 4697 generic.go:334] "Generic (PLEG): container finished" podID="24c17e72-9143-4da4-8b8f-0a777f568dfc" 
containerID="07f58960a19683c1040b73c8976af023a0a8b40600f503a2cf86ad61626526ea" exitCode=0 Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.815461 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9lwr" event={"ID":"24c17e72-9143-4da4-8b8f-0a777f568dfc","Type":"ContainerDied","Data":"07f58960a19683c1040b73c8976af023a0a8b40600f503a2cf86ad61626526ea"} Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.818251 4697 patch_prober.go:28] interesting pod/route-controller-manager-c6cc786fd-2v4gr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.818291 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" podUID="31443f3c-4452-47aa-a4bd-2ecd733c442d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.818466 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" event={"ID":"31443f3c-4452-47aa-a4bd-2ecd733c442d","Type":"ContainerStarted","Data":"13ed9039884c50ac8fefe558246a85c1850ed332913f78784aa5ee09bc4aa6b1"} Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.818489 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" event={"ID":"31443f3c-4452-47aa-a4bd-2ecd733c442d","Type":"ContainerStarted","Data":"60372e685cf73528551a48d5b492fb703f699519761ff9707b76ad65fb03cf4e"} Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.818501 4697 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.857545 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" podStartSLOduration=2.85752237 podStartE2EDuration="2.85752237s" podCreationTimestamp="2026-01-27 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:14:55.854022188 +0000 UTC m=+392.026421979" watchObservedRunningTime="2026-01-27 15:14:55.85752237 +0000 UTC m=+392.029922151" Jan 27 15:14:55 crc kubenswrapper[4697]: I0127 15:14:55.903297 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4"] Jan 27 15:14:55 crc kubenswrapper[4697]: W0127 15:14:55.908986 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6a86af_c5ef_4c92_b496_e9c7974a2e46.slice/crio-cd51211a22a0a0ec21e34293ed535a35f3f9d1ebacd93bec287d7ad357c0f00d WatchSource:0}: Error finding container cd51211a22a0a0ec21e34293ed535a35f3f9d1ebacd93bec287d7ad357c0f00d: Status 404 returned error can't find the container with id cd51211a22a0a0ec21e34293ed535a35f3f9d1ebacd93bec287d7ad357c0f00d Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.283470 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sc5n9"] Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.284793 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.291943 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.304526 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sc5n9"] Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.416690 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238b6b97-1f60-4c86-a041-351dba477c64-catalog-content\") pod \"certified-operators-sc5n9\" (UID: \"238b6b97-1f60-4c86-a041-351dba477c64\") " pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.416732 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp92l\" (UniqueName: \"kubernetes.io/projected/238b6b97-1f60-4c86-a041-351dba477c64-kube-api-access-cp92l\") pod \"certified-operators-sc5n9\" (UID: \"238b6b97-1f60-4c86-a041-351dba477c64\") " pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.416763 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238b6b97-1f60-4c86-a041-351dba477c64-utilities\") pod \"certified-operators-sc5n9\" (UID: \"238b6b97-1f60-4c86-a041-351dba477c64\") " pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.488334 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6d8lm"] Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.489279 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.491002 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.517828 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238b6b97-1f60-4c86-a041-351dba477c64-catalog-content\") pod \"certified-operators-sc5n9\" (UID: \"238b6b97-1f60-4c86-a041-351dba477c64\") " pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.517873 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp92l\" (UniqueName: \"kubernetes.io/projected/238b6b97-1f60-4c86-a041-351dba477c64-kube-api-access-cp92l\") pod \"certified-operators-sc5n9\" (UID: \"238b6b97-1f60-4c86-a041-351dba477c64\") " pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.517925 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238b6b97-1f60-4c86-a041-351dba477c64-utilities\") pod \"certified-operators-sc5n9\" (UID: \"238b6b97-1f60-4c86-a041-351dba477c64\") " pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.518409 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238b6b97-1f60-4c86-a041-351dba477c64-utilities\") pod \"certified-operators-sc5n9\" (UID: \"238b6b97-1f60-4c86-a041-351dba477c64\") " pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.518438 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/238b6b97-1f60-4c86-a041-351dba477c64-catalog-content\") pod \"certified-operators-sc5n9\" (UID: \"238b6b97-1f60-4c86-a041-351dba477c64\") " pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.557751 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp92l\" (UniqueName: \"kubernetes.io/projected/238b6b97-1f60-4c86-a041-351dba477c64-kube-api-access-cp92l\") pod \"certified-operators-sc5n9\" (UID: \"238b6b97-1f60-4c86-a041-351dba477c64\") " pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.564313 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6d8lm"] Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.604381 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.618823 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92xq\" (UniqueName: \"kubernetes.io/projected/53f86999-4825-49b5-8652-a6b6bcc1dc5e-kube-api-access-m92xq\") pod \"community-operators-6d8lm\" (UID: \"53f86999-4825-49b5-8652-a6b6bcc1dc5e\") " pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.618926 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53f86999-4825-49b5-8652-a6b6bcc1dc5e-catalog-content\") pod \"community-operators-6d8lm\" (UID: \"53f86999-4825-49b5-8652-a6b6bcc1dc5e\") " pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.618953 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53f86999-4825-49b5-8652-a6b6bcc1dc5e-utilities\") pod \"community-operators-6d8lm\" (UID: \"53f86999-4825-49b5-8652-a6b6bcc1dc5e\") " pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.719773 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53f86999-4825-49b5-8652-a6b6bcc1dc5e-catalog-content\") pod \"community-operators-6d8lm\" (UID: \"53f86999-4825-49b5-8652-a6b6bcc1dc5e\") " pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.720094 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53f86999-4825-49b5-8652-a6b6bcc1dc5e-utilities\") pod \"community-operators-6d8lm\" (UID: \"53f86999-4825-49b5-8652-a6b6bcc1dc5e\") " pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.720165 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m92xq\" (UniqueName: \"kubernetes.io/projected/53f86999-4825-49b5-8652-a6b6bcc1dc5e-kube-api-access-m92xq\") pod \"community-operators-6d8lm\" (UID: \"53f86999-4825-49b5-8652-a6b6bcc1dc5e\") " pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.721955 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53f86999-4825-49b5-8652-a6b6bcc1dc5e-catalog-content\") pod \"community-operators-6d8lm\" (UID: \"53f86999-4825-49b5-8652-a6b6bcc1dc5e\") " pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.722168 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/53f86999-4825-49b5-8652-a6b6bcc1dc5e-utilities\") pod \"community-operators-6d8lm\" (UID: \"53f86999-4825-49b5-8652-a6b6bcc1dc5e\") " pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.735878 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m92xq\" (UniqueName: \"kubernetes.io/projected/53f86999-4825-49b5-8652-a6b6bcc1dc5e-kube-api-access-m92xq\") pod \"community-operators-6d8lm\" (UID: \"53f86999-4825-49b5-8652-a6b6bcc1dc5e\") " pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.802676 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.826199 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" event={"ID":"7a6a86af-c5ef-4c92-b496-e9c7974a2e46","Type":"ContainerStarted","Data":"dc45319a1c854510c5bd881515c1cfc5b5570a2797a39b20c4923a4ac87ad26c"} Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.826241 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" event={"ID":"7a6a86af-c5ef-4c92-b496-e9c7974a2e46","Type":"ContainerStarted","Data":"cd51211a22a0a0ec21e34293ed535a35f3f9d1ebacd93bec287d7ad357c0f00d"} Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.827526 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.836978 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.866484 4697 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c8d4bc847-8rsl4" podStartSLOduration=3.8664665449999998 podStartE2EDuration="3.866466545s" podCreationTimestamp="2026-01-27 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:14:56.864553629 +0000 UTC m=+393.036953410" watchObservedRunningTime="2026-01-27 15:14:56.866466545 +0000 UTC m=+393.038866316" Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.876644 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbj7r" event={"ID":"52ecc276-9ad2-4527-9e59-a4e19c63d851","Type":"ContainerStarted","Data":"415450b735cf5980defd0eb0babac4d8bc45d5e2e4f588d8358e2541688fa371"} Jan 27 15:14:56 crc kubenswrapper[4697]: I0127 15:14:56.884062 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" Jan 27 15:14:57 crc kubenswrapper[4697]: I0127 15:14:57.017824 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sc5n9"] Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.236055 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6d8lm"] Jan 27 15:14:58 crc kubenswrapper[4697]: W0127 15:14:57.241181 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53f86999_4825_49b5_8652_a6b6bcc1dc5e.slice/crio-4b4f6f1edf58f7bcbd56c663c6cf2f081cbd87c448ee44e1301bf1fdf0b0d836 WatchSource:0}: Error finding container 4b4f6f1edf58f7bcbd56c663c6cf2f081cbd87c448ee44e1301bf1fdf0b0d836: Status 404 returned error can't find the container with id 4b4f6f1edf58f7bcbd56c663c6cf2f081cbd87c448ee44e1301bf1fdf0b0d836 Jan 27 15:14:58 crc kubenswrapper[4697]: 
I0127 15:14:57.883672 4697 generic.go:334] "Generic (PLEG): container finished" podID="24c17e72-9143-4da4-8b8f-0a777f568dfc" containerID="d1be9074295132afbc28a92a76e0906ec2f63ea70b52fa1fbab32278c0b1e187" exitCode=0 Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.883753 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9lwr" event={"ID":"24c17e72-9143-4da4-8b8f-0a777f568dfc","Type":"ContainerDied","Data":"d1be9074295132afbc28a92a76e0906ec2f63ea70b52fa1fbab32278c0b1e187"} Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.886713 4697 generic.go:334] "Generic (PLEG): container finished" podID="53f86999-4825-49b5-8652-a6b6bcc1dc5e" containerID="ed8818b2aea55de93e32e6be5014c9f1b37e38349a11416ada0f546a45f65cb7" exitCode=0 Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.887089 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d8lm" event={"ID":"53f86999-4825-49b5-8652-a6b6bcc1dc5e","Type":"ContainerDied","Data":"ed8818b2aea55de93e32e6be5014c9f1b37e38349a11416ada0f546a45f65cb7"} Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.887115 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d8lm" event={"ID":"53f86999-4825-49b5-8652-a6b6bcc1dc5e","Type":"ContainerStarted","Data":"4b4f6f1edf58f7bcbd56c663c6cf2f081cbd87c448ee44e1301bf1fdf0b0d836"} Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.896251 4697 generic.go:334] "Generic (PLEG): container finished" podID="52ecc276-9ad2-4527-9e59-a4e19c63d851" containerID="415450b735cf5980defd0eb0babac4d8bc45d5e2e4f588d8358e2541688fa371" exitCode=0 Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.896346 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbj7r" 
event={"ID":"52ecc276-9ad2-4527-9e59-a4e19c63d851","Type":"ContainerDied","Data":"415450b735cf5980defd0eb0babac4d8bc45d5e2e4f588d8358e2541688fa371"} Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.899433 4697 generic.go:334] "Generic (PLEG): container finished" podID="238b6b97-1f60-4c86-a041-351dba477c64" containerID="9fe6ae1fd0ab80eea25f5b4fae2d7f396df959e6d968324ba9338ab936651f76" exitCode=0 Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.900540 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sc5n9" event={"ID":"238b6b97-1f60-4c86-a041-351dba477c64","Type":"ContainerDied","Data":"9fe6ae1fd0ab80eea25f5b4fae2d7f396df959e6d968324ba9338ab936651f76"} Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:57.900572 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sc5n9" event={"ID":"238b6b97-1f60-4c86-a041-351dba477c64","Type":"ContainerStarted","Data":"0634198b9f33aa068d7b3ce1d5ea61bded035f328c7df8c7ec78ced5cba7d5fa"} Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:58.909274 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9lwr" event={"ID":"24c17e72-9143-4da4-8b8f-0a777f568dfc","Type":"ContainerStarted","Data":"aa8d6be55e626544dfb5b561a4b30efbc73444f9af0761855b8191c4d5a2f8ba"} Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:58.911565 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d8lm" event={"ID":"53f86999-4825-49b5-8652-a6b6bcc1dc5e","Type":"ContainerStarted","Data":"61a0a4857c8cd09224e6b8f5c0f82c4b35574c689e93a68b5cae9b4b0238ffeb"} Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:58.915419 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbj7r" 
event={"ID":"52ecc276-9ad2-4527-9e59-a4e19c63d851","Type":"ContainerStarted","Data":"c775ff7d48b55f3f269b7c06b848439f47a288eadabff87d048420a1ef4a308e"} Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:58.932846 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q9lwr" podStartSLOduration=2.319223322 podStartE2EDuration="5.932826856s" podCreationTimestamp="2026-01-27 15:14:53 +0000 UTC" firstStartedPulling="2026-01-27 15:14:54.80841317 +0000 UTC m=+390.980812951" lastFinishedPulling="2026-01-27 15:14:58.422016704 +0000 UTC m=+394.594416485" observedRunningTime="2026-01-27 15:14:58.928418742 +0000 UTC m=+395.100818523" watchObservedRunningTime="2026-01-27 15:14:58.932826856 +0000 UTC m=+395.105226637" Jan 27 15:14:58 crc kubenswrapper[4697]: I0127 15:14:58.945247 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cbj7r" podStartSLOduration=2.267370794 podStartE2EDuration="4.945226728s" podCreationTimestamp="2026-01-27 15:14:54 +0000 UTC" firstStartedPulling="2026-01-27 15:14:55.825018445 +0000 UTC m=+391.997418226" lastFinishedPulling="2026-01-27 15:14:58.502874379 +0000 UTC m=+394.675274160" observedRunningTime="2026-01-27 15:14:58.944627783 +0000 UTC m=+395.117027564" watchObservedRunningTime="2026-01-27 15:14:58.945226728 +0000 UTC m=+395.117626509" Jan 27 15:14:59 crc kubenswrapper[4697]: I0127 15:14:59.923317 4697 generic.go:334] "Generic (PLEG): container finished" podID="53f86999-4825-49b5-8652-a6b6bcc1dc5e" containerID="61a0a4857c8cd09224e6b8f5c0f82c4b35574c689e93a68b5cae9b4b0238ffeb" exitCode=0 Jan 27 15:14:59 crc kubenswrapper[4697]: I0127 15:14:59.923529 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d8lm" event={"ID":"53f86999-4825-49b5-8652-a6b6bcc1dc5e","Type":"ContainerDied","Data":"61a0a4857c8cd09224e6b8f5c0f82c4b35574c689e93a68b5cae9b4b0238ffeb"} Jan 27 
15:14:59 crc kubenswrapper[4697]: I0127 15:14:59.929490 4697 generic.go:334] "Generic (PLEG): container finished" podID="238b6b97-1f60-4c86-a041-351dba477c64" containerID="4bdb30157916068c0cab2b2dc7abe6dd283524bda352a05193bcb6df6646d7e3" exitCode=0 Jan 27 15:14:59 crc kubenswrapper[4697]: I0127 15:14:59.930717 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sc5n9" event={"ID":"238b6b97-1f60-4c86-a041-351dba477c64","Type":"ContainerDied","Data":"4bdb30157916068c0cab2b2dc7abe6dd283524bda352a05193bcb6df6646d7e3"} Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.173366 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25"] Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.174270 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.176083 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.176299 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.192680 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25"] Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.275245 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/257b9c99-2693-4921-b8bb-4ca5c66e711c-config-volume\") pod \"collect-profiles-29492115-hjr25\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.275397 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/257b9c99-2693-4921-b8bb-4ca5c66e711c-secret-volume\") pod \"collect-profiles-29492115-hjr25\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.275433 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkw9\" (UniqueName: \"kubernetes.io/projected/257b9c99-2693-4921-b8bb-4ca5c66e711c-kube-api-access-rjkw9\") pod \"collect-profiles-29492115-hjr25\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.377147 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/257b9c99-2693-4921-b8bb-4ca5c66e711c-secret-volume\") pod \"collect-profiles-29492115-hjr25\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.377488 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjkw9\" (UniqueName: \"kubernetes.io/projected/257b9c99-2693-4921-b8bb-4ca5c66e711c-kube-api-access-rjkw9\") pod \"collect-profiles-29492115-hjr25\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.377551 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/257b9c99-2693-4921-b8bb-4ca5c66e711c-config-volume\") pod \"collect-profiles-29492115-hjr25\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.378364 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/257b9c99-2693-4921-b8bb-4ca5c66e711c-config-volume\") pod \"collect-profiles-29492115-hjr25\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.382717 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/257b9c99-2693-4921-b8bb-4ca5c66e711c-secret-volume\") pod \"collect-profiles-29492115-hjr25\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.408042 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjkw9\" (UniqueName: \"kubernetes.io/projected/257b9c99-2693-4921-b8bb-4ca5c66e711c-kube-api-access-rjkw9\") pod \"collect-profiles-29492115-hjr25\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.488141 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.931453 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25"] Jan 27 15:15:00 crc kubenswrapper[4697]: W0127 15:15:00.937300 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod257b9c99_2693_4921_b8bb_4ca5c66e711c.slice/crio-0a86e1a360f7c91a45edc288b6be390583a7823b632cec55f15f078979b49122 WatchSource:0}: Error finding container 0a86e1a360f7c91a45edc288b6be390583a7823b632cec55f15f078979b49122: Status 404 returned error can't find the container with id 0a86e1a360f7c91a45edc288b6be390583a7823b632cec55f15f078979b49122 Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.940145 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sc5n9" event={"ID":"238b6b97-1f60-4c86-a041-351dba477c64","Type":"ContainerStarted","Data":"cd620575483487dab616dc666b951476b268efda152f0d4c44733ae6a97faa1d"} Jan 27 15:15:00 crc kubenswrapper[4697]: I0127 15:15:00.963088 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sc5n9" podStartSLOduration=2.145168224 podStartE2EDuration="4.963070126s" podCreationTimestamp="2026-01-27 15:14:56 +0000 UTC" firstStartedPulling="2026-01-27 15:14:57.901038483 +0000 UTC m=+394.073438274" lastFinishedPulling="2026-01-27 15:15:00.718940395 +0000 UTC m=+396.891340176" observedRunningTime="2026-01-27 15:15:00.960980166 +0000 UTC m=+397.133379957" watchObservedRunningTime="2026-01-27 15:15:00.963070126 +0000 UTC m=+397.135469907" Jan 27 15:15:01 crc kubenswrapper[4697]: I0127 15:15:01.946574 4697 generic.go:334] "Generic (PLEG): container finished" podID="257b9c99-2693-4921-b8bb-4ca5c66e711c" 
containerID="f5d2797f5463ee235ca182a38e12058696a35ed747108c80ec94a54264e8f6a9" exitCode=0 Jan 27 15:15:01 crc kubenswrapper[4697]: I0127 15:15:01.946666 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" event={"ID":"257b9c99-2693-4921-b8bb-4ca5c66e711c","Type":"ContainerDied","Data":"f5d2797f5463ee235ca182a38e12058696a35ed747108c80ec94a54264e8f6a9"} Jan 27 15:15:01 crc kubenswrapper[4697]: I0127 15:15:01.946987 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" event={"ID":"257b9c99-2693-4921-b8bb-4ca5c66e711c","Type":"ContainerStarted","Data":"0a86e1a360f7c91a45edc288b6be390583a7823b632cec55f15f078979b49122"} Jan 27 15:15:01 crc kubenswrapper[4697]: I0127 15:15:01.948971 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d8lm" event={"ID":"53f86999-4825-49b5-8652-a6b6bcc1dc5e","Type":"ContainerStarted","Data":"bfc503d8d0abb6a470c581b93e6eb31493bd4c0d1281242d9f33100f78caa2f4"} Jan 27 15:15:01 crc kubenswrapper[4697]: I0127 15:15:01.985450 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6d8lm" podStartSLOduration=2.455445741 podStartE2EDuration="5.985430916s" podCreationTimestamp="2026-01-27 15:14:56 +0000 UTC" firstStartedPulling="2026-01-27 15:14:57.889720936 +0000 UTC m=+394.062120717" lastFinishedPulling="2026-01-27 15:15:01.419706111 +0000 UTC m=+397.592105892" observedRunningTime="2026-01-27 15:15:01.984902664 +0000 UTC m=+398.157302455" watchObservedRunningTime="2026-01-27 15:15:01.985430916 +0000 UTC m=+398.157830697" Jan 27 15:15:03 crc kubenswrapper[4697]: I0127 15:15:03.961141 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" 
event={"ID":"257b9c99-2693-4921-b8bb-4ca5c66e711c","Type":"ContainerDied","Data":"0a86e1a360f7c91a45edc288b6be390583a7823b632cec55f15f078979b49122"} Jan 27 15:15:03 crc kubenswrapper[4697]: I0127 15:15:03.961649 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a86e1a360f7c91a45edc288b6be390583a7823b632cec55f15f078979b49122" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.227043 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.227102 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.272013 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.406649 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.406720 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.686892 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.837289 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjkw9\" (UniqueName: \"kubernetes.io/projected/257b9c99-2693-4921-b8bb-4ca5c66e711c-kube-api-access-rjkw9\") pod \"257b9c99-2693-4921-b8bb-4ca5c66e711c\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.837334 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/257b9c99-2693-4921-b8bb-4ca5c66e711c-config-volume\") pod \"257b9c99-2693-4921-b8bb-4ca5c66e711c\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.837392 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/257b9c99-2693-4921-b8bb-4ca5c66e711c-secret-volume\") pod \"257b9c99-2693-4921-b8bb-4ca5c66e711c\" (UID: \"257b9c99-2693-4921-b8bb-4ca5c66e711c\") " Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.838512 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257b9c99-2693-4921-b8bb-4ca5c66e711c-config-volume" (OuterVolumeSpecName: "config-volume") pod "257b9c99-2693-4921-b8bb-4ca5c66e711c" (UID: "257b9c99-2693-4921-b8bb-4ca5c66e711c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.842963 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257b9c99-2693-4921-b8bb-4ca5c66e711c-kube-api-access-rjkw9" (OuterVolumeSpecName: "kube-api-access-rjkw9") pod "257b9c99-2693-4921-b8bb-4ca5c66e711c" (UID: "257b9c99-2693-4921-b8bb-4ca5c66e711c"). 
InnerVolumeSpecName "kube-api-access-rjkw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.846915 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257b9c99-2693-4921-b8bb-4ca5c66e711c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "257b9c99-2693-4921-b8bb-4ca5c66e711c" (UID: "257b9c99-2693-4921-b8bb-4ca5c66e711c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.941931 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjkw9\" (UniqueName: \"kubernetes.io/projected/257b9c99-2693-4921-b8bb-4ca5c66e711c-kube-api-access-rjkw9\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.941985 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/257b9c99-2693-4921-b8bb-4ca5c66e711c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.941995 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/257b9c99-2693-4921-b8bb-4ca5c66e711c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:04 crc kubenswrapper[4697]: I0127 15:15:04.965158 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25" Jan 27 15:15:05 crc kubenswrapper[4697]: I0127 15:15:05.007636 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q9lwr" Jan 27 15:15:05 crc kubenswrapper[4697]: I0127 15:15:05.448466 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cbj7r" podUID="52ecc276-9ad2-4527-9e59-a4e19c63d851" containerName="registry-server" probeResult="failure" output=< Jan 27 15:15:05 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:15:05 crc kubenswrapper[4697]: > Jan 27 15:15:06 crc kubenswrapper[4697]: I0127 15:15:06.605237 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:15:06 crc kubenswrapper[4697]: I0127 15:15:06.605571 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:15:06 crc kubenswrapper[4697]: I0127 15:15:06.640433 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:15:06 crc kubenswrapper[4697]: I0127 15:15:06.746834 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cq5lk" Jan 27 15:15:06 crc kubenswrapper[4697]: I0127 15:15:06.808515 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:15:06 crc kubenswrapper[4697]: I0127 15:15:06.808557 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:15:06 crc kubenswrapper[4697]: I0127 15:15:06.822130 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-qlprf"] Jan 27 15:15:06 crc kubenswrapper[4697]: I0127 15:15:06.874038 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:15:07 crc kubenswrapper[4697]: I0127 15:15:07.016717 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6d8lm" Jan 27 15:15:07 crc kubenswrapper[4697]: I0127 15:15:07.021096 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sc5n9" Jan 27 15:15:14 crc kubenswrapper[4697]: I0127 15:15:14.459478 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:15:14 crc kubenswrapper[4697]: I0127 15:15:14.510481 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cbj7r" Jan 27 15:15:25 crc kubenswrapper[4697]: I0127 15:15:25.108624 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:15:25 crc kubenswrapper[4697]: I0127 15:15:25.109135 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:15:25 crc kubenswrapper[4697]: I0127 15:15:25.109193 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:15:25 crc 
kubenswrapper[4697]: I0127 15:15:25.109991 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9e4101694f1899c8f44fa50fe32233101f8e492ef340427ddc5bf1941a9a036"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:15:25 crc kubenswrapper[4697]: I0127 15:15:25.110061 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://f9e4101694f1899c8f44fa50fe32233101f8e492ef340427ddc5bf1941a9a036" gracePeriod=600 Jan 27 15:15:25 crc kubenswrapper[4697]: I0127 15:15:25.292169 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="f9e4101694f1899c8f44fa50fe32233101f8e492ef340427ddc5bf1941a9a036" exitCode=0 Jan 27 15:15:25 crc kubenswrapper[4697]: I0127 15:15:25.292211 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"f9e4101694f1899c8f44fa50fe32233101f8e492ef340427ddc5bf1941a9a036"} Jan 27 15:15:25 crc kubenswrapper[4697]: I0127 15:15:25.292241 4697 scope.go:117] "RemoveContainer" containerID="faaced835dbc76e880a1fd29824b00fca5f720686e476bcba6ad4f807e28e8e2" Jan 27 15:15:26 crc kubenswrapper[4697]: I0127 15:15:26.300018 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"160554eb4c1c1a0e1f168d0c1e6bb97842cc86fd35dee22ef4a9ea3ffb4e7b6c"} Jan 27 15:15:31 crc kubenswrapper[4697]: I0127 15:15:31.873463 4697 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" podUID="43fd9fa4-b232-4d49-8f52-27d016de4cad" containerName="registry" containerID="cri-o://4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd" gracePeriod=30 Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.298508 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.328674 4697 generic.go:334] "Generic (PLEG): container finished" podID="43fd9fa4-b232-4d49-8f52-27d016de4cad" containerID="4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd" exitCode=0 Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.328716 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" event={"ID":"43fd9fa4-b232-4d49-8f52-27d016de4cad","Type":"ContainerDied","Data":"4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd"} Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.328759 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" event={"ID":"43fd9fa4-b232-4d49-8f52-27d016de4cad","Type":"ContainerDied","Data":"ecb5fb67f830321bb79acff313b3649d77744cb96b27d21a807a3b03c69d1093"} Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.328778 4697 scope.go:117] "RemoveContainer" containerID="4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.329447 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qlprf" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.330935 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"43fd9fa4-b232-4d49-8f52-27d016de4cad\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.330977 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs6nr\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-kube-api-access-hs6nr\") pod \"43fd9fa4-b232-4d49-8f52-27d016de4cad\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.331027 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-trusted-ca\") pod \"43fd9fa4-b232-4d49-8f52-27d016de4cad\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.331107 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43fd9fa4-b232-4d49-8f52-27d016de4cad-installation-pull-secrets\") pod \"43fd9fa4-b232-4d49-8f52-27d016de4cad\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.331238 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-bound-sa-token\") pod \"43fd9fa4-b232-4d49-8f52-27d016de4cad\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.331283 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-tls\") pod \"43fd9fa4-b232-4d49-8f52-27d016de4cad\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.331303 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-certificates\") pod \"43fd9fa4-b232-4d49-8f52-27d016de4cad\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.331337 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43fd9fa4-b232-4d49-8f52-27d016de4cad-ca-trust-extracted\") pod \"43fd9fa4-b232-4d49-8f52-27d016de4cad\" (UID: \"43fd9fa4-b232-4d49-8f52-27d016de4cad\") " Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.332094 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "43fd9fa4-b232-4d49-8f52-27d016de4cad" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.333065 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "43fd9fa4-b232-4d49-8f52-27d016de4cad" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.354586 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43fd9fa4-b232-4d49-8f52-27d016de4cad-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "43fd9fa4-b232-4d49-8f52-27d016de4cad" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.380007 4697 scope.go:117] "RemoveContainer" containerID="4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd" Jan 27 15:15:32 crc kubenswrapper[4697]: E0127 15:15:32.380353 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd\": container with ID starting with 4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd not found: ID does not exist" containerID="4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.380463 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd"} err="failed to get container status \"4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd\": rpc error: code = NotFound desc = could not find container \"4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd\": container with ID starting with 4c05c973d10dc4b2a7a36b30c3e7bb6079e30ac7362f7b4d2d5a568e049272cd not found: ID does not exist" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.432827 4697 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-certificates\") on 
node \"crc\" DevicePath \"\"" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.433098 4697 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43fd9fa4-b232-4d49-8f52-27d016de4cad-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.433186 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43fd9fa4-b232-4d49-8f52-27d016de4cad-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.442500 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43fd9fa4-b232-4d49-8f52-27d016de4cad-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "43fd9fa4-b232-4d49-8f52-27d016de4cad" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.442700 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "43fd9fa4-b232-4d49-8f52-27d016de4cad" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.442747 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-kube-api-access-hs6nr" (OuterVolumeSpecName: "kube-api-access-hs6nr") pod "43fd9fa4-b232-4d49-8f52-27d016de4cad" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad"). InnerVolumeSpecName "kube-api-access-hs6nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.445350 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "43fd9fa4-b232-4d49-8f52-27d016de4cad" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.452854 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "43fd9fa4-b232-4d49-8f52-27d016de4cad" (UID: "43fd9fa4-b232-4d49-8f52-27d016de4cad"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.534757 4697 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43fd9fa4-b232-4d49-8f52-27d016de4cad-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.534849 4697 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.534863 4697 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.534872 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs6nr\" (UniqueName: 
\"kubernetes.io/projected/43fd9fa4-b232-4d49-8f52-27d016de4cad-kube-api-access-hs6nr\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.652799 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qlprf"] Jan 27 15:15:32 crc kubenswrapper[4697]: I0127 15:15:32.656194 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qlprf"] Jan 27 15:15:34 crc kubenswrapper[4697]: I0127 15:15:34.577071 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43fd9fa4-b232-4d49-8f52-27d016de4cad" path="/var/lib/kubelet/pods/43fd9fa4-b232-4d49-8f52-27d016de4cad/volumes" Jan 27 15:17:24 crc kubenswrapper[4697]: I0127 15:17:24.852042 4697 scope.go:117] "RemoveContainer" containerID="2fc1f82f5af8a5feb11e657399a7ee2576c5e556ea67cadb25f04438e85c53ca" Jan 27 15:17:25 crc kubenswrapper[4697]: I0127 15:17:25.108757 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:17:25 crc kubenswrapper[4697]: I0127 15:17:25.108901 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:17:55 crc kubenswrapper[4697]: I0127 15:17:55.109467 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 27 15:17:55 crc kubenswrapper[4697]: I0127 15:17:55.110109 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:18:25 crc kubenswrapper[4697]: I0127 15:18:25.108945 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:18:25 crc kubenswrapper[4697]: I0127 15:18:25.109675 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:18:25 crc kubenswrapper[4697]: I0127 15:18:25.109751 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:18:25 crc kubenswrapper[4697]: I0127 15:18:25.110541 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"160554eb4c1c1a0e1f168d0c1e6bb97842cc86fd35dee22ef4a9ea3ffb4e7b6c"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:18:25 crc kubenswrapper[4697]: I0127 15:18:25.110797 4697 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://160554eb4c1c1a0e1f168d0c1e6bb97842cc86fd35dee22ef4a9ea3ffb4e7b6c" gracePeriod=600 Jan 27 15:18:25 crc kubenswrapper[4697]: I0127 15:18:25.303195 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="160554eb4c1c1a0e1f168d0c1e6bb97842cc86fd35dee22ef4a9ea3ffb4e7b6c" exitCode=0 Jan 27 15:18:25 crc kubenswrapper[4697]: I0127 15:18:25.303233 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"160554eb4c1c1a0e1f168d0c1e6bb97842cc86fd35dee22ef4a9ea3ffb4e7b6c"} Jan 27 15:18:25 crc kubenswrapper[4697]: I0127 15:18:25.303264 4697 scope.go:117] "RemoveContainer" containerID="f9e4101694f1899c8f44fa50fe32233101f8e492ef340427ddc5bf1941a9a036" Jan 27 15:18:26 crc kubenswrapper[4697]: I0127 15:18:26.312204 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"29b143d9c88ca58d5e8f4a44a13b9ecc0a8f5a18f7aa625b7c0810002ed2b91e"} Jan 27 15:20:25 crc kubenswrapper[4697]: I0127 15:20:25.109849 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:20:25 crc kubenswrapper[4697]: I0127 15:20:25.110863 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.085091 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk"] Jan 27 15:20:37 crc kubenswrapper[4697]: E0127 15:20:37.085817 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257b9c99-2693-4921-b8bb-4ca5c66e711c" containerName="collect-profiles" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.085834 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="257b9c99-2693-4921-b8bb-4ca5c66e711c" containerName="collect-profiles" Jan 27 15:20:37 crc kubenswrapper[4697]: E0127 15:20:37.085861 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43fd9fa4-b232-4d49-8f52-27d016de4cad" containerName="registry" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.085870 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="43fd9fa4-b232-4d49-8f52-27d016de4cad" containerName="registry" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.085975 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="43fd9fa4-b232-4d49-8f52-27d016de4cad" containerName="registry" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.085992 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="257b9c99-2693-4921-b8bb-4ca5c66e711c" containerName="collect-profiles" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.086429 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.088751 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.088881 4697 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-lglcp" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.094435 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.098420 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk"] Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.102931 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-z8rp7"] Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.103746 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-z8rp7" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.106121 4697 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lt7dg" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.131690 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8n4gj"] Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.132347 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.135999 4697 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-chshk" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.154416 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8gc\" (UniqueName: \"kubernetes.io/projected/276bf9e3-2608-4096-bc3a-fff69d9dfc64-kube-api-access-zg8gc\") pod \"cert-manager-858654f9db-z8rp7\" (UID: \"276bf9e3-2608-4096-bc3a-fff69d9dfc64\") " pod="cert-manager/cert-manager-858654f9db-z8rp7" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.154509 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btswh\" (UniqueName: \"kubernetes.io/projected/4cf2332b-1a6a-460c-a3a8-d7110b0960a2-kube-api-access-btswh\") pod \"cert-manager-cainjector-cf98fcc89-2vqhk\" (UID: \"4cf2332b-1a6a-460c-a3a8-d7110b0960a2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.154571 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25hf\" (UniqueName: \"kubernetes.io/projected/a29b72d6-fcd5-4a5a-b779-437cfc4c8365-kube-api-access-h25hf\") pod \"cert-manager-webhook-687f57d79b-8n4gj\" (UID: \"a29b72d6-fcd5-4a5a-b779-437cfc4c8365\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.156755 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-z8rp7"] Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.172775 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8n4gj"] Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.256036 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25hf\" (UniqueName: \"kubernetes.io/projected/a29b72d6-fcd5-4a5a-b779-437cfc4c8365-kube-api-access-h25hf\") pod \"cert-manager-webhook-687f57d79b-8n4gj\" (UID: \"a29b72d6-fcd5-4a5a-b779-437cfc4c8365\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.256105 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg8gc\" (UniqueName: \"kubernetes.io/projected/276bf9e3-2608-4096-bc3a-fff69d9dfc64-kube-api-access-zg8gc\") pod \"cert-manager-858654f9db-z8rp7\" (UID: \"276bf9e3-2608-4096-bc3a-fff69d9dfc64\") " pod="cert-manager/cert-manager-858654f9db-z8rp7" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.256146 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btswh\" (UniqueName: \"kubernetes.io/projected/4cf2332b-1a6a-460c-a3a8-d7110b0960a2-kube-api-access-btswh\") pod \"cert-manager-cainjector-cf98fcc89-2vqhk\" (UID: \"4cf2332b-1a6a-460c-a3a8-d7110b0960a2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.275107 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btswh\" (UniqueName: \"kubernetes.io/projected/4cf2332b-1a6a-460c-a3a8-d7110b0960a2-kube-api-access-btswh\") pod \"cert-manager-cainjector-cf98fcc89-2vqhk\" (UID: \"4cf2332b-1a6a-460c-a3a8-d7110b0960a2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.276125 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25hf\" (UniqueName: \"kubernetes.io/projected/a29b72d6-fcd5-4a5a-b779-437cfc4c8365-kube-api-access-h25hf\") pod \"cert-manager-webhook-687f57d79b-8n4gj\" (UID: \"a29b72d6-fcd5-4a5a-b779-437cfc4c8365\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.278483 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg8gc\" (UniqueName: \"kubernetes.io/projected/276bf9e3-2608-4096-bc3a-fff69d9dfc64-kube-api-access-zg8gc\") pod \"cert-manager-858654f9db-z8rp7\" (UID: \"276bf9e3-2608-4096-bc3a-fff69d9dfc64\") " pod="cert-manager/cert-manager-858654f9db-z8rp7" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.403613 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.415557 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-z8rp7" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.449195 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.724182 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8n4gj"] Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.731448 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.828876 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-z8rp7"] Jan 27 15:20:37 crc kubenswrapper[4697]: W0127 15:20:37.834348 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod276bf9e3_2608_4096_bc3a_fff69d9dfc64.slice/crio-962b775bed781838ca175fdd36719f9a8ec42c197e6c2eee92eebc65b9d83f32 WatchSource:0}: Error finding container 962b775bed781838ca175fdd36719f9a8ec42c197e6c2eee92eebc65b9d83f32: Status 404 returned error can't 
find the container with id 962b775bed781838ca175fdd36719f9a8ec42c197e6c2eee92eebc65b9d83f32 Jan 27 15:20:37 crc kubenswrapper[4697]: I0127 15:20:37.885625 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk"] Jan 27 15:20:37 crc kubenswrapper[4697]: W0127 15:20:37.893033 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf2332b_1a6a_460c_a3a8_d7110b0960a2.slice/crio-2b6d3279f87cceb3a00b60f4a383b0ad42307bb9f3e53f9e33d210b96e02499d WatchSource:0}: Error finding container 2b6d3279f87cceb3a00b60f4a383b0ad42307bb9f3e53f9e33d210b96e02499d: Status 404 returned error can't find the container with id 2b6d3279f87cceb3a00b60f4a383b0ad42307bb9f3e53f9e33d210b96e02499d Jan 27 15:20:38 crc kubenswrapper[4697]: I0127 15:20:38.594700 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk" event={"ID":"4cf2332b-1a6a-460c-a3a8-d7110b0960a2","Type":"ContainerStarted","Data":"2b6d3279f87cceb3a00b60f4a383b0ad42307bb9f3e53f9e33d210b96e02499d"} Jan 27 15:20:38 crc kubenswrapper[4697]: I0127 15:20:38.596646 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" event={"ID":"a29b72d6-fcd5-4a5a-b779-437cfc4c8365","Type":"ContainerStarted","Data":"bb907da22e20277b8b3851f34f24624e2e2759943ff0c88b2f154bff895391c8"} Jan 27 15:20:38 crc kubenswrapper[4697]: I0127 15:20:38.598251 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-z8rp7" event={"ID":"276bf9e3-2608-4096-bc3a-fff69d9dfc64","Type":"ContainerStarted","Data":"962b775bed781838ca175fdd36719f9a8ec42c197e6c2eee92eebc65b9d83f32"} Jan 27 15:20:45 crc kubenswrapper[4697]: I0127 15:20:45.645076 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" 
event={"ID":"a29b72d6-fcd5-4a5a-b779-437cfc4c8365","Type":"ContainerStarted","Data":"aae9b69bc1d3c009a0ba7cc912e4917f174206f027c95da442323323548e86b4"} Jan 27 15:20:45 crc kubenswrapper[4697]: I0127 15:20:45.645616 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" Jan 27 15:20:45 crc kubenswrapper[4697]: I0127 15:20:45.647389 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-z8rp7" event={"ID":"276bf9e3-2608-4096-bc3a-fff69d9dfc64","Type":"ContainerStarted","Data":"0c22e46bc29c6831b2e862b13c109a4731e471ffaa708b43b5831ed837aa635d"} Jan 27 15:20:45 crc kubenswrapper[4697]: I0127 15:20:45.680850 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" podStartSLOduration=1.513226143 podStartE2EDuration="8.680825837s" podCreationTimestamp="2026-01-27 15:20:37 +0000 UTC" firstStartedPulling="2026-01-27 15:20:37.731230733 +0000 UTC m=+733.903630514" lastFinishedPulling="2026-01-27 15:20:44.898830427 +0000 UTC m=+741.071230208" observedRunningTime="2026-01-27 15:20:45.661158877 +0000 UTC m=+741.833558678" watchObservedRunningTime="2026-01-27 15:20:45.680825837 +0000 UTC m=+741.853225638" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.282035 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-z8rp7" podStartSLOduration=2.220343372 podStartE2EDuration="9.282016346s" podCreationTimestamp="2026-01-27 15:20:37 +0000 UTC" firstStartedPulling="2026-01-27 15:20:37.837102122 +0000 UTC m=+734.009501903" lastFinishedPulling="2026-01-27 15:20:44.898775096 +0000 UTC m=+741.071174877" observedRunningTime="2026-01-27 15:20:45.687911911 +0000 UTC m=+741.860311702" watchObservedRunningTime="2026-01-27 15:20:46.282016346 +0000 UTC m=+742.454416127" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.283715 4697 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6jxw"] Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.284152 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovn-controller" containerID="cri-o://eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d" gracePeriod=30 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.284284 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277" gracePeriod=30 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.284361 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="nbdb" containerID="cri-o://c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d" gracePeriod=30 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.284393 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovn-acl-logging" containerID="cri-o://f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c" gracePeriod=30 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.284358 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="northd" containerID="cri-o://e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822" gracePeriod=30 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.284359 4697 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="sbdb" containerID="cri-o://971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9" gracePeriod=30 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.284567 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kube-rbac-proxy-node" containerID="cri-o://24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d" gracePeriod=30 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.376139 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" containerID="cri-o://918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd" gracePeriod=30 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.652301 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/3.log" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.655750 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk" event={"ID":"4cf2332b-1a6a-460c-a3a8-d7110b0960a2","Type":"ContainerStarted","Data":"d12e95068802a6411268325fe52e021f5ba2d3c69f41d00881d3e46b884ddd59"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.656510 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovn-acl-logging/0.log" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.657094 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovn-controller/0.log" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.657657 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.657675 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq89t_7fbc1c27-fba2-40df-95dd-3842bd1f1906/kube-multus/2.log" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.658212 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq89t_7fbc1c27-fba2-40df-95dd-3842bd1f1906/kube-multus/1.log" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.658255 4697 generic.go:334] "Generic (PLEG): container finished" podID="7fbc1c27-fba2-40df-95dd-3842bd1f1906" containerID="5609c867cf313ac9913a53179ba2ad268eb1d131842e8928d5d9605076980d92" exitCode=2 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.658308 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq89t" event={"ID":"7fbc1c27-fba2-40df-95dd-3842bd1f1906","Type":"ContainerDied","Data":"5609c867cf313ac9913a53179ba2ad268eb1d131842e8928d5d9605076980d92"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.658342 4697 scope.go:117] "RemoveContainer" containerID="55217260dcb8aebc9ddf2d903bc0257bc8a122956102c0215d6a5a20451d6afe" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.658926 4697 scope.go:117] "RemoveContainer" containerID="5609c867cf313ac9913a53179ba2ad268eb1d131842e8928d5d9605076980d92" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.661833 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovnkube-controller/3.log" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.666449 4697 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovn-acl-logging/0.log" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.666943 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6jxw_6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/ovn-controller/0.log" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667335 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd" exitCode=0 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667364 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9" exitCode=0 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667374 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d" exitCode=0 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667402 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822" exitCode=0 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667412 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277" exitCode=0 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667421 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d" exitCode=0 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667429 4697 generic.go:334] "Generic (PLEG): 
container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c" exitCode=143 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667438 4697 generic.go:334] "Generic (PLEG): container finished" podID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerID="eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d" exitCode=143 Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667481 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667481 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667555 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667580 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667599 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667617 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667637 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667656 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667672 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667683 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667695 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667705 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667717 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667727 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667737 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667748 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667757 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667770 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667808 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667821 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667831 4697 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667840 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667850 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667860 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667870 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667880 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667889 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667898 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667948 4697 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667965 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667976 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667986 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.667997 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668005 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668015 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668023 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} Jan 27 
15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668032 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668041 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668050 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668062 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6jxw" event={"ID":"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad","Type":"ContainerDied","Data":"24ccad5ec43b98acb432bf323d3f81e8e30f928ca13d69c59aa9557597dfee96"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668076 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668086 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668097 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668105 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668114 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668123 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668132 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668141 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668152 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.668162 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785"} Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681414 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") 
" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681466 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-etc-openvswitch\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681494 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-config\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681524 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovn-node-metrics-cert\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681559 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-kubelet\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681579 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-script-lib\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681603 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-var-lib-openvswitch\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681631 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-systemd\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681665 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-bin\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681686 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-systemd-units\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681708 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-openvswitch\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681729 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-log-socket\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681761 
4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-ovn\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681817 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jp8x\" (UniqueName: \"kubernetes.io/projected/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-kube-api-access-5jp8x\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681842 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-slash\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681861 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-netns\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681886 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-netd\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681915 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-ovn-kubernetes\") pod 
\"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681940 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-env-overrides\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.681959 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-node-log\") pod \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\" (UID: \"6a1ce5ad-1a8c-4a28-99d8-fc71649954ad\") " Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682204 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-log-socket" (OuterVolumeSpecName: "log-socket") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682249 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682257 4697 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682257 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-slash" (OuterVolumeSpecName: "host-slash") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682275 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682320 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682458 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682464 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682619 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682650 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682671 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682692 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682713 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682730 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682739 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.682762 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.683273 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-node-log" (OuterVolumeSpecName: "node-log") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.684358 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.688355 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.688576 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-kube-api-access-5jp8x" (OuterVolumeSpecName: "kube-api-access-5jp8x") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "kube-api-access-5jp8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.692887 4697 scope.go:117] "RemoveContainer" containerID="918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.698065 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" (UID: "6a1ce5ad-1a8c-4a28-99d8-fc71649954ad"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.704563 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vqhk" podStartSLOduration=1.9158465470000001 podStartE2EDuration="9.704541547s" podCreationTimestamp="2026-01-27 15:20:37 +0000 UTC" firstStartedPulling="2026-01-27 15:20:37.895291524 +0000 UTC m=+734.067691305" lastFinishedPulling="2026-01-27 15:20:45.683986494 +0000 UTC m=+741.856386305" observedRunningTime="2026-01-27 15:20:46.671814227 +0000 UTC m=+742.844214028" watchObservedRunningTime="2026-01-27 15:20:46.704541547 +0000 UTC m=+742.876941328" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.742337 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.742490 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zbnv4"] Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743293 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="northd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743318 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="northd" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743339 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kube-rbac-proxy-node" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743347 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kube-rbac-proxy-node" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743367 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743376 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743396 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743404 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743415 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kubecfg-setup" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743422 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kubecfg-setup" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743435 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovn-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743442 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovn-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743457 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="nbdb" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743466 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="nbdb" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743474 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="sbdb" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743481 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="sbdb" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743499 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743506 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743524 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovn-acl-logging" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743532 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovn-acl-logging" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.743546 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.743553 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744241 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovn-acl-logging" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744264 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744273 4697 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744282 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="sbdb" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744302 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kube-rbac-proxy-node" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744314 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744329 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="nbdb" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744339 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="northd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744351 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovn-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744361 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.744580 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744592 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744865 4697 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.744881 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.745133 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.745153 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" containerName="ovnkube-controller" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.751453 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.773770 4697 scope.go:117] "RemoveContainer" containerID="971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783061 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-cni-netd\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783116 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 
15:20:46.783144 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-run-systemd\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783165 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-env-overrides\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783189 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-systemd-units\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783218 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-kubelet\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783288 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-etc-openvswitch\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: 
I0127 15:20:46.783341 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-ovnkube-script-lib\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783373 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpzgz\" (UniqueName: \"kubernetes.io/projected/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-kube-api-access-dpzgz\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783443 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-run-openvswitch\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783463 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-run-ovn\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783515 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-ovn-node-metrics-cert\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 
27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783543 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-node-log\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783590 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-cni-bin\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783614 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-slash\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783636 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-var-lib-openvswitch\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783688 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-run-netns\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" 
Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783712 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-run-ovn-kubernetes\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783761 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-log-socket\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783908 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-ovnkube-config\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.783982 4697 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784002 4697 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784014 4697 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-openvswitch\") on node 
\"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784027 4697 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784065 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jp8x\" (UniqueName: \"kubernetes.io/projected/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-kube-api-access-5jp8x\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784079 4697 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784090 4697 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784101 4697 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784113 4697 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784153 4697 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784164 4697 
reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784177 4697 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784189 4697 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784200 4697 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784241 4697 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784254 4697 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784266 4697 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784277 4697 reconciler_common.go:293] "Volume detached for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.784328 4697 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.797031 4697 scope.go:117] "RemoveContainer" containerID="c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.809090 4697 scope.go:117] "RemoveContainer" containerID="e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.824206 4697 scope.go:117] "RemoveContainer" containerID="25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.841188 4697 scope.go:117] "RemoveContainer" containerID="24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.855666 4697 scope.go:117] "RemoveContainer" containerID="f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.866183 4697 scope.go:117] "RemoveContainer" containerID="eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.877649 4697 scope.go:117] "RemoveContainer" containerID="b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885298 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-run-ovn-kubernetes\") pod \"ovnkube-node-zbnv4\" (UID: 
\"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885336 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-log-socket\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885361 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-ovnkube-config\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885378 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-cni-netd\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885394 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885410 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-run-systemd\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885429 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-env-overrides\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885447 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-systemd-units\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885469 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-kubelet\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885497 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-etc-openvswitch\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885511 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-ovnkube-script-lib\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc 
kubenswrapper[4697]: I0127 15:20:46.885534 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpzgz\" (UniqueName: \"kubernetes.io/projected/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-kube-api-access-dpzgz\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885556 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-run-openvswitch\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885574 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-run-ovn\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885595 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-ovn-node-metrics-cert\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885618 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-node-log\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885637 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-cni-bin\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885654 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-slash\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885673 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-var-lib-openvswitch\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885693 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-run-netns\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885762 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-run-netns\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885818 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-run-ovn-kubernetes\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.885841 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-log-socket\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.886409 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-ovnkube-config\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.886457 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-cni-netd\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.886484 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.886511 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-run-systemd\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.886835 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-env-overrides\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.886875 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-systemd-units\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.886899 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-kubelet\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.886923 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-etc-openvswitch\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.887347 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-ovnkube-script-lib\") pod \"ovnkube-node-zbnv4\" (UID: 
\"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.887690 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-run-openvswitch\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.887750 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-run-ovn\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.888558 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-cni-bin\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.888643 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-var-lib-openvswitch\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.888658 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-host-slash\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc 
kubenswrapper[4697]: I0127 15:20:46.888666 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-node-log\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.890162 4697 scope.go:117] "RemoveContainer" containerID="918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.890635 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd\": container with ID starting with 918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd not found: ID does not exist" containerID="918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.890661 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} err="failed to get container status \"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd\": rpc error: code = NotFound desc = could not find container \"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd\": container with ID starting with 918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.890680 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.890948 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\": container with ID starting with 8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd not found: ID does not exist" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.890987 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} err="failed to get container status \"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\": rpc error: code = NotFound desc = could not find container \"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\": container with ID starting with 8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.891001 4697 scope.go:117] "RemoveContainer" containerID="971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.891335 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\": container with ID starting with 971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9 not found: ID does not exist" containerID="971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.891355 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} err="failed to get container status \"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\": rpc error: code = NotFound desc = could not find container \"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\": container with ID 
starting with 971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.891367 4697 scope.go:117] "RemoveContainer" containerID="c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.892050 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\": container with ID starting with c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d not found: ID does not exist" containerID="c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.892090 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} err="failed to get container status \"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\": rpc error: code = NotFound desc = could not find container \"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\": container with ID starting with c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.892106 4697 scope.go:117] "RemoveContainer" containerID="e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.892404 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\": container with ID starting with e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822 not found: ID does not exist" containerID="e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822" Jan 27 
15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.892425 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} err="failed to get container status \"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\": rpc error: code = NotFound desc = could not find container \"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\": container with ID starting with e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.892439 4697 scope.go:117] "RemoveContainer" containerID="25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.892778 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\": container with ID starting with 25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277 not found: ID does not exist" containerID="25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.892833 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} err="failed to get container status \"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\": rpc error: code = NotFound desc = could not find container \"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\": container with ID starting with 25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.892862 4697 scope.go:117] "RemoveContainer" 
containerID="24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.893396 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-ovn-node-metrics-cert\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.893470 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\": container with ID starting with 24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d not found: ID does not exist" containerID="24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.893490 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} err="failed to get container status \"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\": rpc error: code = NotFound desc = could not find container \"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\": container with ID starting with 24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.893504 4697 scope.go:117] "RemoveContainer" containerID="f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.893715 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\": container with ID starting with 
f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c not found: ID does not exist" containerID="f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.893736 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} err="failed to get container status \"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\": rpc error: code = NotFound desc = could not find container \"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\": container with ID starting with f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.893748 4697 scope.go:117] "RemoveContainer" containerID="eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.893964 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\": container with ID starting with eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d not found: ID does not exist" containerID="eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.893985 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} err="failed to get container status \"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\": rpc error: code = NotFound desc = could not find container \"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\": container with ID starting with eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d not found: ID does not 
exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.893998 4697 scope.go:117] "RemoveContainer" containerID="b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785" Jan 27 15:20:46 crc kubenswrapper[4697]: E0127 15:20:46.894162 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\": container with ID starting with b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785 not found: ID does not exist" containerID="b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.894180 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785"} err="failed to get container status \"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\": rpc error: code = NotFound desc = could not find container \"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\": container with ID starting with b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.894191 4697 scope.go:117] "RemoveContainer" containerID="918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.894388 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} err="failed to get container status \"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd\": rpc error: code = NotFound desc = could not find container \"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd\": container with ID starting with 918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd not found: ID 
does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.894412 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.894687 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} err="failed to get container status \"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\": rpc error: code = NotFound desc = could not find container \"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\": container with ID starting with 8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.894711 4697 scope.go:117] "RemoveContainer" containerID="971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.894916 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} err="failed to get container status \"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\": rpc error: code = NotFound desc = could not find container \"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\": container with ID starting with 971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.894937 4697 scope.go:117] "RemoveContainer" containerID="c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.895304 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} err="failed to get container 
status \"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\": rpc error: code = NotFound desc = could not find container \"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\": container with ID starting with c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.895325 4697 scope.go:117] "RemoveContainer" containerID="e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.895529 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} err="failed to get container status \"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\": rpc error: code = NotFound desc = could not find container \"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\": container with ID starting with e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.895556 4697 scope.go:117] "RemoveContainer" containerID="25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.895743 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} err="failed to get container status \"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\": rpc error: code = NotFound desc = could not find container \"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\": container with ID starting with 25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.895765 4697 scope.go:117] "RemoveContainer" 
containerID="24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896000 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} err="failed to get container status \"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\": rpc error: code = NotFound desc = could not find container \"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\": container with ID starting with 24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896022 4697 scope.go:117] "RemoveContainer" containerID="f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896191 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} err="failed to get container status \"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\": rpc error: code = NotFound desc = could not find container \"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\": container with ID starting with f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896214 4697 scope.go:117] "RemoveContainer" containerID="eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896387 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} err="failed to get container status \"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\": rpc error: code = NotFound desc = could 
not find container \"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\": container with ID starting with eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896406 4697 scope.go:117] "RemoveContainer" containerID="b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896540 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785"} err="failed to get container status \"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\": rpc error: code = NotFound desc = could not find container \"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\": container with ID starting with b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896556 4697 scope.go:117] "RemoveContainer" containerID="918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896692 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} err="failed to get container status \"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd\": rpc error: code = NotFound desc = could not find container \"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd\": container with ID starting with 918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896709 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 
15:20:46.896944 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} err="failed to get container status \"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\": rpc error: code = NotFound desc = could not find container \"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\": container with ID starting with 8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.896959 4697 scope.go:117] "RemoveContainer" containerID="971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897122 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} err="failed to get container status \"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\": rpc error: code = NotFound desc = could not find container \"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\": container with ID starting with 971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897140 4697 scope.go:117] "RemoveContainer" containerID="c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897279 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} err="failed to get container status \"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\": rpc error: code = NotFound desc = could not find container \"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\": container with ID starting with 
c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897295 4697 scope.go:117] "RemoveContainer" containerID="e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897423 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} err="failed to get container status \"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\": rpc error: code = NotFound desc = could not find container \"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\": container with ID starting with e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897443 4697 scope.go:117] "RemoveContainer" containerID="25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897588 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} err="failed to get container status \"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\": rpc error: code = NotFound desc = could not find container \"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\": container with ID starting with 25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897610 4697 scope.go:117] "RemoveContainer" containerID="24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897829 4697 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} err="failed to get container status \"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\": rpc error: code = NotFound desc = could not find container \"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\": container with ID starting with 24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.897860 4697 scope.go:117] "RemoveContainer" containerID="f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.898227 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} err="failed to get container status \"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\": rpc error: code = NotFound desc = could not find container \"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\": container with ID starting with f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.898252 4697 scope.go:117] "RemoveContainer" containerID="eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.898486 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} err="failed to get container status \"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\": rpc error: code = NotFound desc = could not find container \"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\": container with ID starting with eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d not found: ID does not 
exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.898519 4697 scope.go:117] "RemoveContainer" containerID="b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.898755 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785"} err="failed to get container status \"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\": rpc error: code = NotFound desc = could not find container \"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\": container with ID starting with b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.898790 4697 scope.go:117] "RemoveContainer" containerID="918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.898972 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd"} err="failed to get container status \"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd\": rpc error: code = NotFound desc = could not find container \"918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd\": container with ID starting with 918734811d71295456ae6cb8f392e78a32b0db85d73470ffd6ddfaadc1efa3bd not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.898993 4697 scope.go:117] "RemoveContainer" containerID="8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.899160 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd"} err="failed to get container status 
\"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\": rpc error: code = NotFound desc = could not find container \"8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd\": container with ID starting with 8434917bca076a475c1e4b907733bca9cee4559bea25a20542bc654c51f925fd not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.899181 4697 scope.go:117] "RemoveContainer" containerID="971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.899384 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9"} err="failed to get container status \"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\": rpc error: code = NotFound desc = could not find container \"971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9\": container with ID starting with 971bf4362650664f5133d9b68b7a5ce76e54dafbf28c88730f678ada0256ffd9 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.899414 4697 scope.go:117] "RemoveContainer" containerID="c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.899607 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d"} err="failed to get container status \"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\": rpc error: code = NotFound desc = could not find container \"c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d\": container with ID starting with c9146d3d41cb348c99ea78d62aef3aa7d46c5f99855e042fdf5bc38b18556e8d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.899627 4697 scope.go:117] "RemoveContainer" 
containerID="e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.899776 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822"} err="failed to get container status \"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\": rpc error: code = NotFound desc = could not find container \"e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822\": container with ID starting with e33c68fac5ef11b2704b8a1460588937489a191ea2eacb70548b1e99cf718822 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.899878 4697 scope.go:117] "RemoveContainer" containerID="25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.900373 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277"} err="failed to get container status \"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\": rpc error: code = NotFound desc = could not find container \"25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277\": container with ID starting with 25f52622d494cffbbd36c21f76148b896a10d3c1ace649ac0824e847b812a277 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.900397 4697 scope.go:117] "RemoveContainer" containerID="24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.900615 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d"} err="failed to get container status \"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\": rpc error: code = NotFound desc = could 
not find container \"24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d\": container with ID starting with 24ac4a674c5fb98082daeabf52736988951ea5c66064ff4bb63f0d40c43b947d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.900638 4697 scope.go:117] "RemoveContainer" containerID="f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.900857 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c"} err="failed to get container status \"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\": rpc error: code = NotFound desc = could not find container \"f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c\": container with ID starting with f8784cf473729161592d08c782f4754724d6609756a30040715cbff8c732a09c not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.900877 4697 scope.go:117] "RemoveContainer" containerID="eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.901071 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d"} err="failed to get container status \"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\": rpc error: code = NotFound desc = could not find container \"eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d\": container with ID starting with eea7c2b7dbea8198cc4709a808f8ecab760514224f4e3eb96d04c3bd7f16df6d not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.901091 4697 scope.go:117] "RemoveContainer" containerID="b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 
15:20:46.901297 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785"} err="failed to get container status \"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\": rpc error: code = NotFound desc = could not find container \"b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785\": container with ID starting with b9666b8a501ef015431ee3be1fc34ca2b196011df3007d2e4d508f09f9967785 not found: ID does not exist" Jan 27 15:20:46 crc kubenswrapper[4697]: I0127 15:20:46.902856 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpzgz\" (UniqueName: \"kubernetes.io/projected/d0bce6c4-5db7-4f8d-96f5-6afec6a4d438-kube-api-access-dpzgz\") pod \"ovnkube-node-zbnv4\" (UID: \"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438\") " pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:47 crc kubenswrapper[4697]: I0127 15:20:47.004593 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6jxw"] Jan 27 15:20:47 crc kubenswrapper[4697]: I0127 15:20:47.008012 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6jxw"] Jan 27 15:20:47 crc kubenswrapper[4697]: I0127 15:20:47.088676 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:47 crc kubenswrapper[4697]: W0127 15:20:47.107499 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0bce6c4_5db7_4f8d_96f5_6afec6a4d438.slice/crio-c0e03ce6358421defcbd01cff0c66b9c1c89d1d1296c20d8598b5566b9a44fac WatchSource:0}: Error finding container c0e03ce6358421defcbd01cff0c66b9c1c89d1d1296c20d8598b5566b9a44fac: Status 404 returned error can't find the container with id c0e03ce6358421defcbd01cff0c66b9c1c89d1d1296c20d8598b5566b9a44fac Jan 27 15:20:47 crc kubenswrapper[4697]: I0127 15:20:47.678220 4697 generic.go:334] "Generic (PLEG): container finished" podID="d0bce6c4-5db7-4f8d-96f5-6afec6a4d438" containerID="63b019bb9645050acdb654e8d267adb09f386198d8de0ed665ace396bac790dd" exitCode=0 Jan 27 15:20:47 crc kubenswrapper[4697]: I0127 15:20:47.678294 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerDied","Data":"63b019bb9645050acdb654e8d267adb09f386198d8de0ed665ace396bac790dd"} Jan 27 15:20:47 crc kubenswrapper[4697]: I0127 15:20:47.678326 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerStarted","Data":"c0e03ce6358421defcbd01cff0c66b9c1c89d1d1296c20d8598b5566b9a44fac"} Jan 27 15:20:47 crc kubenswrapper[4697]: I0127 15:20:47.685401 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq89t_7fbc1c27-fba2-40df-95dd-3842bd1f1906/kube-multus/2.log" Jan 27 15:20:47 crc kubenswrapper[4697]: I0127 15:20:47.685538 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq89t" 
event={"ID":"7fbc1c27-fba2-40df-95dd-3842bd1f1906","Type":"ContainerStarted","Data":"0dc17a26aaf9557a22ad5b69f1312de16a5282dc073443e33527f15c09f4589d"} Jan 27 15:20:48 crc kubenswrapper[4697]: I0127 15:20:48.576321 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1ce5ad-1a8c-4a28-99d8-fc71649954ad" path="/var/lib/kubelet/pods/6a1ce5ad-1a8c-4a28-99d8-fc71649954ad/volumes" Jan 27 15:20:48 crc kubenswrapper[4697]: I0127 15:20:48.695375 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerStarted","Data":"07a734721ea095c6d8279534c57c11fdef1d518af6492e8606f190d742b6a8f1"} Jan 27 15:20:48 crc kubenswrapper[4697]: I0127 15:20:48.695746 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerStarted","Data":"0191c1caaa9637282a5b467be33778122e5585320d60cb8971e27f36b9956f71"} Jan 27 15:20:48 crc kubenswrapper[4697]: I0127 15:20:48.695766 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerStarted","Data":"743585f9b7a961e152c269134d2c2e9fe5d85e8bf54acdf0f274746c2cf6d859"} Jan 27 15:20:48 crc kubenswrapper[4697]: I0127 15:20:48.695780 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerStarted","Data":"d05aadf02f5b33bc0c554056e15eb9584e53030301b24b729616f1ac3e42cecd"} Jan 27 15:20:48 crc kubenswrapper[4697]: I0127 15:20:48.695813 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" 
event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerStarted","Data":"627d16a63b74aea08b7c4b030a100d04114fa1e5284bb34c4e1ef49d189df2ab"} Jan 27 15:20:48 crc kubenswrapper[4697]: I0127 15:20:48.695829 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerStarted","Data":"1c19ac30cbf981ba5502e477a6f5cd7743f20901ce7705eef9420295107a798b"} Jan 27 15:20:50 crc kubenswrapper[4697]: I0127 15:20:50.720090 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerStarted","Data":"78ce6c4de54129ae7df19ae7674846662e00cf653c3e5bae1679472470800a4a"} Jan 27 15:20:52 crc kubenswrapper[4697]: I0127 15:20:52.453125 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" Jan 27 15:20:53 crc kubenswrapper[4697]: I0127 15:20:53.773876 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" event={"ID":"d0bce6c4-5db7-4f8d-96f5-6afec6a4d438","Type":"ContainerStarted","Data":"762f502a5b222265c1f2e04141268cb8d17f4c68e871211c6fbeb5acfb0c9f33"} Jan 27 15:20:54 crc kubenswrapper[4697]: I0127 15:20:54.780281 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:54 crc kubenswrapper[4697]: I0127 15:20:54.780322 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:54 crc kubenswrapper[4697]: I0127 15:20:54.813113 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:54 crc kubenswrapper[4697]: I0127 15:20:54.850263 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" podStartSLOduration=8.850243886 podStartE2EDuration="8.850243886s" podCreationTimestamp="2026-01-27 15:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:20:54.819463954 +0000 UTC m=+750.991863755" watchObservedRunningTime="2026-01-27 15:20:54.850243886 +0000 UTC m=+751.022643667" Jan 27 15:20:55 crc kubenswrapper[4697]: I0127 15:20:55.109074 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:20:55 crc kubenswrapper[4697]: I0127 15:20:55.109157 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:20:55 crc kubenswrapper[4697]: I0127 15:20:55.787231 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:20:55 crc kubenswrapper[4697]: I0127 15:20:55.824883 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:21:01 crc kubenswrapper[4697]: I0127 15:21:01.768567 4697 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 15:21:17 crc kubenswrapper[4697]: I0127 15:21:17.226624 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zbnv4" Jan 27 15:21:25 crc kubenswrapper[4697]: I0127 15:21:25.109001 
4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:21:25 crc kubenswrapper[4697]: I0127 15:21:25.109460 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:21:25 crc kubenswrapper[4697]: I0127 15:21:25.109511 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:21:25 crc kubenswrapper[4697]: I0127 15:21:25.110001 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29b143d9c88ca58d5e8f4a44a13b9ecc0a8f5a18f7aa625b7c0810002ed2b91e"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:21:25 crc kubenswrapper[4697]: I0127 15:21:25.110051 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://29b143d9c88ca58d5e8f4a44a13b9ecc0a8f5a18f7aa625b7c0810002ed2b91e" gracePeriod=600 Jan 27 15:21:25 crc kubenswrapper[4697]: I0127 15:21:25.260564 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="29b143d9c88ca58d5e8f4a44a13b9ecc0a8f5a18f7aa625b7c0810002ed2b91e" exitCode=0 Jan 27 15:21:25 crc 
kubenswrapper[4697]: I0127 15:21:25.260923 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"29b143d9c88ca58d5e8f4a44a13b9ecc0a8f5a18f7aa625b7c0810002ed2b91e"} Jan 27 15:21:25 crc kubenswrapper[4697]: I0127 15:21:25.261011 4697 scope.go:117] "RemoveContainer" containerID="160554eb4c1c1a0e1f168d0c1e6bb97842cc86fd35dee22ef4a9ea3ffb4e7b6c" Jan 27 15:21:26 crc kubenswrapper[4697]: I0127 15:21:26.274629 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"939f9c93ba265c5d99e68011d55d9135f74940c6f260b8c578f1d67844ceb0ed"} Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.595587 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd"] Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.597506 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.601345 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.618638 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd"] Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.717426 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.717478 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7qln\" (UniqueName: \"kubernetes.io/projected/efe31ae7-f928-4690-b47c-57c996d20817-kube-api-access-p7qln\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.717511 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: 
I0127 15:21:39.818876 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7qln\" (UniqueName: \"kubernetes.io/projected/efe31ae7-f928-4690-b47c-57c996d20817-kube-api-access-p7qln\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.818966 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.819083 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.819403 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.819508 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.845903 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7qln\" (UniqueName: \"kubernetes.io/projected/efe31ae7-f928-4690-b47c-57c996d20817-kube-api-access-p7qln\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:39 crc kubenswrapper[4697]: I0127 15:21:39.916258 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:40 crc kubenswrapper[4697]: I0127 15:21:40.317857 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd"] Jan 27 15:21:40 crc kubenswrapper[4697]: I0127 15:21:40.383005 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" event={"ID":"efe31ae7-f928-4690-b47c-57c996d20817","Type":"ContainerStarted","Data":"611b3e4369b8938ea0b9639b3c36734840f4f5b248c9e0097f707e66a43a6e67"} Jan 27 15:21:41 crc kubenswrapper[4697]: I0127 15:21:41.389690 4697 generic.go:334] "Generic (PLEG): container finished" podID="efe31ae7-f928-4690-b47c-57c996d20817" containerID="6df314d7dcc65bdb56345de4babecad037350063d2ed6b13493f5453388359d6" exitCode=0 Jan 27 15:21:41 crc kubenswrapper[4697]: I0127 15:21:41.389729 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" event={"ID":"efe31ae7-f928-4690-b47c-57c996d20817","Type":"ContainerDied","Data":"6df314d7dcc65bdb56345de4babecad037350063d2ed6b13493f5453388359d6"} Jan 27 15:21:41 crc kubenswrapper[4697]: I0127 15:21:41.942047 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-px4d6"] Jan 27 15:21:41 crc kubenswrapper[4697]: I0127 15:21:41.943234 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:41 crc kubenswrapper[4697]: I0127 15:21:41.970547 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-px4d6"] Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.050377 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7ps\" (UniqueName: \"kubernetes.io/projected/9fb950f8-a2c7-493d-a09f-dea6fe067370-kube-api-access-4k7ps\") pod \"redhat-operators-px4d6\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.050448 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-catalog-content\") pod \"redhat-operators-px4d6\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.050528 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-utilities\") pod \"redhat-operators-px4d6\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " pod="openshift-marketplace/redhat-operators-px4d6" 
Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.151958 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k7ps\" (UniqueName: \"kubernetes.io/projected/9fb950f8-a2c7-493d-a09f-dea6fe067370-kube-api-access-4k7ps\") pod \"redhat-operators-px4d6\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.152053 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-catalog-content\") pod \"redhat-operators-px4d6\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.152082 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-utilities\") pod \"redhat-operators-px4d6\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.152643 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-catalog-content\") pod \"redhat-operators-px4d6\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.152937 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-utilities\") pod \"redhat-operators-px4d6\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.174677 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k7ps\" (UniqueName: \"kubernetes.io/projected/9fb950f8-a2c7-493d-a09f-dea6fe067370-kube-api-access-4k7ps\") pod \"redhat-operators-px4d6\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.279874 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:42 crc kubenswrapper[4697]: I0127 15:21:42.479183 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-px4d6"] Jan 27 15:21:42 crc kubenswrapper[4697]: W0127 15:21:42.485161 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb950f8_a2c7_493d_a09f_dea6fe067370.slice/crio-b5b81def41532311e4b47be4d6b6f0645be5c3671d0f0bcbf4f88b2cd2b96217 WatchSource:0}: Error finding container b5b81def41532311e4b47be4d6b6f0645be5c3671d0f0bcbf4f88b2cd2b96217: Status 404 returned error can't find the container with id b5b81def41532311e4b47be4d6b6f0645be5c3671d0f0bcbf4f88b2cd2b96217 Jan 27 15:21:43 crc kubenswrapper[4697]: I0127 15:21:43.408301 4697 generic.go:334] "Generic (PLEG): container finished" podID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerID="7d5a2579e4e43d8a9dd99c02e30ba9e511f1341190f40fd3db1259ccb6b5c208" exitCode=0 Jan 27 15:21:43 crc kubenswrapper[4697]: I0127 15:21:43.408375 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px4d6" event={"ID":"9fb950f8-a2c7-493d-a09f-dea6fe067370","Type":"ContainerDied","Data":"7d5a2579e4e43d8a9dd99c02e30ba9e511f1341190f40fd3db1259ccb6b5c208"} Jan 27 15:21:43 crc kubenswrapper[4697]: I0127 15:21:43.408958 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px4d6" 
event={"ID":"9fb950f8-a2c7-493d-a09f-dea6fe067370","Type":"ContainerStarted","Data":"b5b81def41532311e4b47be4d6b6f0645be5c3671d0f0bcbf4f88b2cd2b96217"} Jan 27 15:21:43 crc kubenswrapper[4697]: I0127 15:21:43.413507 4697 generic.go:334] "Generic (PLEG): container finished" podID="efe31ae7-f928-4690-b47c-57c996d20817" containerID="a95a922c5194c881b1b11bf70e52adf57e79607cb23b2d881f4341ca9b73eb2f" exitCode=0 Jan 27 15:21:43 crc kubenswrapper[4697]: I0127 15:21:43.413541 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" event={"ID":"efe31ae7-f928-4690-b47c-57c996d20817","Type":"ContainerDied","Data":"a95a922c5194c881b1b11bf70e52adf57e79607cb23b2d881f4341ca9b73eb2f"} Jan 27 15:21:44 crc kubenswrapper[4697]: I0127 15:21:44.421524 4697 generic.go:334] "Generic (PLEG): container finished" podID="efe31ae7-f928-4690-b47c-57c996d20817" containerID="18e08a3ce03989d45bc01b7f6d518a344e82d9c2164682d908c72c7bb4b0e8e6" exitCode=0 Jan 27 15:21:44 crc kubenswrapper[4697]: I0127 15:21:44.422082 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" event={"ID":"efe31ae7-f928-4690-b47c-57c996d20817","Type":"ContainerDied","Data":"18e08a3ce03989d45bc01b7f6d518a344e82d9c2164682d908c72c7bb4b0e8e6"} Jan 27 15:21:44 crc kubenswrapper[4697]: I0127 15:21:44.423179 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px4d6" event={"ID":"9fb950f8-a2c7-493d-a09f-dea6fe067370","Type":"ContainerStarted","Data":"867e4d5b8aa913644d6dbe952dd54b059411c64e96b8a20b8cd471ed8de3bd45"} Jan 27 15:21:45 crc kubenswrapper[4697]: I0127 15:21:45.434697 4697 generic.go:334] "Generic (PLEG): container finished" podID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerID="867e4d5b8aa913644d6dbe952dd54b059411c64e96b8a20b8cd471ed8de3bd45" exitCode=0 Jan 27 15:21:45 crc kubenswrapper[4697]: 
I0127 15:21:45.434749 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px4d6" event={"ID":"9fb950f8-a2c7-493d-a09f-dea6fe067370","Type":"ContainerDied","Data":"867e4d5b8aa913644d6dbe952dd54b059411c64e96b8a20b8cd471ed8de3bd45"} Jan 27 15:21:45 crc kubenswrapper[4697]: I0127 15:21:45.709243 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:45 crc kubenswrapper[4697]: I0127 15:21:45.902161 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7qln\" (UniqueName: \"kubernetes.io/projected/efe31ae7-f928-4690-b47c-57c996d20817-kube-api-access-p7qln\") pod \"efe31ae7-f928-4690-b47c-57c996d20817\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " Jan 27 15:21:45 crc kubenswrapper[4697]: I0127 15:21:45.902262 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-util\") pod \"efe31ae7-f928-4690-b47c-57c996d20817\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " Jan 27 15:21:45 crc kubenswrapper[4697]: I0127 15:21:45.902358 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-bundle\") pod \"efe31ae7-f928-4690-b47c-57c996d20817\" (UID: \"efe31ae7-f928-4690-b47c-57c996d20817\") " Jan 27 15:21:45 crc kubenswrapper[4697]: I0127 15:21:45.903063 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-bundle" (OuterVolumeSpecName: "bundle") pod "efe31ae7-f928-4690-b47c-57c996d20817" (UID: "efe31ae7-f928-4690-b47c-57c996d20817"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:21:45 crc kubenswrapper[4697]: I0127 15:21:45.917161 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe31ae7-f928-4690-b47c-57c996d20817-kube-api-access-p7qln" (OuterVolumeSpecName: "kube-api-access-p7qln") pod "efe31ae7-f928-4690-b47c-57c996d20817" (UID: "efe31ae7-f928-4690-b47c-57c996d20817"). InnerVolumeSpecName "kube-api-access-p7qln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:21:46 crc kubenswrapper[4697]: I0127 15:21:46.003942 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7qln\" (UniqueName: \"kubernetes.io/projected/efe31ae7-f928-4690-b47c-57c996d20817-kube-api-access-p7qln\") on node \"crc\" DevicePath \"\"" Jan 27 15:21:46 crc kubenswrapper[4697]: I0127 15:21:46.003993 4697 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:21:46 crc kubenswrapper[4697]: I0127 15:21:46.248047 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-util" (OuterVolumeSpecName: "util") pod "efe31ae7-f928-4690-b47c-57c996d20817" (UID: "efe31ae7-f928-4690-b47c-57c996d20817"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:21:46 crc kubenswrapper[4697]: I0127 15:21:46.306767 4697 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/efe31ae7-f928-4690-b47c-57c996d20817-util\") on node \"crc\" DevicePath \"\"" Jan 27 15:21:46 crc kubenswrapper[4697]: I0127 15:21:46.448521 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" event={"ID":"efe31ae7-f928-4690-b47c-57c996d20817","Type":"ContainerDied","Data":"611b3e4369b8938ea0b9639b3c36734840f4f5b248c9e0097f707e66a43a6e67"} Jan 27 15:21:46 crc kubenswrapper[4697]: I0127 15:21:46.448573 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="611b3e4369b8938ea0b9639b3c36734840f4f5b248c9e0097f707e66a43a6e67" Jan 27 15:21:46 crc kubenswrapper[4697]: I0127 15:21:46.448543 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd" Jan 27 15:21:46 crc kubenswrapper[4697]: I0127 15:21:46.454634 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px4d6" event={"ID":"9fb950f8-a2c7-493d-a09f-dea6fe067370","Type":"ContainerStarted","Data":"065973e512ac85d9a8e4d23acc1d90e12e867f0a2b095a5c5ed17fb201b6bd3f"} Jan 27 15:21:46 crc kubenswrapper[4697]: I0127 15:21:46.752532 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-px4d6" podStartSLOduration=3.168486277 podStartE2EDuration="5.752510505s" podCreationTimestamp="2026-01-27 15:21:41 +0000 UTC" firstStartedPulling="2026-01-27 15:21:43.412315119 +0000 UTC m=+799.584714940" lastFinishedPulling="2026-01-27 15:21:45.996339397 +0000 UTC m=+802.168739168" observedRunningTime="2026-01-27 15:21:46.476115078 +0000 UTC m=+802.648514869" 
watchObservedRunningTime="2026-01-27 15:21:46.752510505 +0000 UTC m=+802.924910296" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.856140 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9ztbp"] Jan 27 15:21:49 crc kubenswrapper[4697]: E0127 15:21:49.857017 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe31ae7-f928-4690-b47c-57c996d20817" containerName="extract" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.857108 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe31ae7-f928-4690-b47c-57c996d20817" containerName="extract" Jan 27 15:21:49 crc kubenswrapper[4697]: E0127 15:21:49.857204 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe31ae7-f928-4690-b47c-57c996d20817" containerName="pull" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.857260 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe31ae7-f928-4690-b47c-57c996d20817" containerName="pull" Jan 27 15:21:49 crc kubenswrapper[4697]: E0127 15:21:49.857337 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe31ae7-f928-4690-b47c-57c996d20817" containerName="util" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.857392 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe31ae7-f928-4690-b47c-57c996d20817" containerName="util" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.857541 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe31ae7-f928-4690-b47c-57c996d20817" containerName="extract" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.857973 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9ztbp" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.859592 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-pcnbm" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.859864 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.863662 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.877253 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9ztbp"] Jan 27 15:21:49 crc kubenswrapper[4697]: I0127 15:21:49.947832 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzq6q\" (UniqueName: \"kubernetes.io/projected/a886f00e-2d21-4e80-81d0-06650c1e178f-kube-api-access-hzq6q\") pod \"nmstate-operator-646758c888-9ztbp\" (UID: \"a886f00e-2d21-4e80-81d0-06650c1e178f\") " pod="openshift-nmstate/nmstate-operator-646758c888-9ztbp" Jan 27 15:21:50 crc kubenswrapper[4697]: I0127 15:21:50.049260 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzq6q\" (UniqueName: \"kubernetes.io/projected/a886f00e-2d21-4e80-81d0-06650c1e178f-kube-api-access-hzq6q\") pod \"nmstate-operator-646758c888-9ztbp\" (UID: \"a886f00e-2d21-4e80-81d0-06650c1e178f\") " pod="openshift-nmstate/nmstate-operator-646758c888-9ztbp" Jan 27 15:21:50 crc kubenswrapper[4697]: I0127 15:21:50.077862 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzq6q\" (UniqueName: \"kubernetes.io/projected/a886f00e-2d21-4e80-81d0-06650c1e178f-kube-api-access-hzq6q\") pod \"nmstate-operator-646758c888-9ztbp\" (UID: 
\"a886f00e-2d21-4e80-81d0-06650c1e178f\") " pod="openshift-nmstate/nmstate-operator-646758c888-9ztbp" Jan 27 15:21:50 crc kubenswrapper[4697]: I0127 15:21:50.175833 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9ztbp" Jan 27 15:21:50 crc kubenswrapper[4697]: I0127 15:21:50.384598 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9ztbp"] Jan 27 15:21:50 crc kubenswrapper[4697]: W0127 15:21:50.389975 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda886f00e_2d21_4e80_81d0_06650c1e178f.slice/crio-a9af46fbd74c368bbe5e0960554568590c80f0b2f71910bda5274d6d0dca19df WatchSource:0}: Error finding container a9af46fbd74c368bbe5e0960554568590c80f0b2f71910bda5274d6d0dca19df: Status 404 returned error can't find the container with id a9af46fbd74c368bbe5e0960554568590c80f0b2f71910bda5274d6d0dca19df Jan 27 15:21:50 crc kubenswrapper[4697]: I0127 15:21:50.473665 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9ztbp" event={"ID":"a886f00e-2d21-4e80-81d0-06650c1e178f","Type":"ContainerStarted","Data":"a9af46fbd74c368bbe5e0960554568590c80f0b2f71910bda5274d6d0dca19df"} Jan 27 15:21:52 crc kubenswrapper[4697]: I0127 15:21:52.280393 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:52 crc kubenswrapper[4697]: I0127 15:21:52.281725 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:52 crc kubenswrapper[4697]: I0127 15:21:52.336856 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:52 crc kubenswrapper[4697]: I0127 15:21:52.532725 4697 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:53 crc kubenswrapper[4697]: I0127 15:21:53.491072 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9ztbp" event={"ID":"a886f00e-2d21-4e80-81d0-06650c1e178f","Type":"ContainerStarted","Data":"67774060e2285592022acce741fecd7f7f4cb2bd5fbb89e9ba18cb3f3a2d0c8f"} Jan 27 15:21:53 crc kubenswrapper[4697]: I0127 15:21:53.506143 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-9ztbp" podStartSLOduration=1.75684584 podStartE2EDuration="4.506120388s" podCreationTimestamp="2026-01-27 15:21:49 +0000 UTC" firstStartedPulling="2026-01-27 15:21:50.395908595 +0000 UTC m=+806.568308376" lastFinishedPulling="2026-01-27 15:21:53.145183143 +0000 UTC m=+809.317582924" observedRunningTime="2026-01-27 15:21:53.50415883 +0000 UTC m=+809.676558611" watchObservedRunningTime="2026-01-27 15:21:53.506120388 +0000 UTC m=+809.678520179" Jan 27 15:21:54 crc kubenswrapper[4697]: I0127 15:21:54.932463 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-px4d6"] Jan 27 15:21:55 crc kubenswrapper[4697]: I0127 15:21:55.502337 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-px4d6" podUID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerName="registry-server" containerID="cri-o://065973e512ac85d9a8e4d23acc1d90e12e867f0a2b095a5c5ed17fb201b6bd3f" gracePeriod=2 Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.519844 4697 generic.go:334] "Generic (PLEG): container finished" podID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerID="065973e512ac85d9a8e4d23acc1d90e12e867f0a2b095a5c5ed17fb201b6bd3f" exitCode=0 Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.520016 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-px4d6" event={"ID":"9fb950f8-a2c7-493d-a09f-dea6fe067370","Type":"ContainerDied","Data":"065973e512ac85d9a8e4d23acc1d90e12e867f0a2b095a5c5ed17fb201b6bd3f"} Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.520229 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px4d6" event={"ID":"9fb950f8-a2c7-493d-a09f-dea6fe067370","Type":"ContainerDied","Data":"b5b81def41532311e4b47be4d6b6f0645be5c3671d0f0bcbf4f88b2cd2b96217"} Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.520249 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5b81def41532311e4b47be4d6b6f0645be5c3671d0f0bcbf4f88b2cd2b96217" Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.550872 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.724758 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k7ps\" (UniqueName: \"kubernetes.io/projected/9fb950f8-a2c7-493d-a09f-dea6fe067370-kube-api-access-4k7ps\") pod \"9fb950f8-a2c7-493d-a09f-dea6fe067370\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.724833 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-utilities\") pod \"9fb950f8-a2c7-493d-a09f-dea6fe067370\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.724891 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-catalog-content\") pod \"9fb950f8-a2c7-493d-a09f-dea6fe067370\" (UID: \"9fb950f8-a2c7-493d-a09f-dea6fe067370\") " 
Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.726766 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-utilities" (OuterVolumeSpecName: "utilities") pod "9fb950f8-a2c7-493d-a09f-dea6fe067370" (UID: "9fb950f8-a2c7-493d-a09f-dea6fe067370"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.732990 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb950f8-a2c7-493d-a09f-dea6fe067370-kube-api-access-4k7ps" (OuterVolumeSpecName: "kube-api-access-4k7ps") pod "9fb950f8-a2c7-493d-a09f-dea6fe067370" (UID: "9fb950f8-a2c7-493d-a09f-dea6fe067370"). InnerVolumeSpecName "kube-api-access-4k7ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.825996 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k7ps\" (UniqueName: \"kubernetes.io/projected/9fb950f8-a2c7-493d-a09f-dea6fe067370-kube-api-access-4k7ps\") on node \"crc\" DevicePath \"\"" Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.826035 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.855263 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fb950f8-a2c7-493d-a09f-dea6fe067370" (UID: "9fb950f8-a2c7-493d-a09f-dea6fe067370"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:21:56 crc kubenswrapper[4697]: I0127 15:21:56.927351 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fb950f8-a2c7-493d-a09f-dea6fe067370-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:21:57 crc kubenswrapper[4697]: I0127 15:21:57.525316 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-px4d6" Jan 27 15:21:57 crc kubenswrapper[4697]: I0127 15:21:57.579900 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-px4d6"] Jan 27 15:21:57 crc kubenswrapper[4697]: I0127 15:21:57.583634 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-px4d6"] Jan 27 15:21:58 crc kubenswrapper[4697]: I0127 15:21:58.575764 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fb950f8-a2c7-493d-a09f-dea6fe067370" path="/var/lib/kubelet/pods/9fb950f8-a2c7-493d-a09f-dea6fe067370/volumes" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.911002 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hkj82"] Jan 27 15:21:59 crc kubenswrapper[4697]: E0127 15:21:59.911499 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerName="extract-utilities" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.911510 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerName="extract-utilities" Jan 27 15:21:59 crc kubenswrapper[4697]: E0127 15:21:59.911518 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerName="extract-content" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.911524 4697 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerName="extract-content" Jan 27 15:21:59 crc kubenswrapper[4697]: E0127 15:21:59.911539 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerName="registry-server" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.911546 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerName="registry-server" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.911650 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fb950f8-a2c7-493d-a09f-dea6fe067370" containerName="registry-server" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.912177 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hkj82" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.913719 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-plzh7" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.915490 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml"] Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.916445 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.922344 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.949897 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-nlwcd"] Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.950570 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.959523 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml"] Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.982758 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0cb7d58a-50bd-4ae2-9e83-5c689667726d-dbus-socket\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.982878 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0133bab9-91e4-4ff6-8dc1-cf282e197dd0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-vt4ml\" (UID: \"0133bab9-91e4-4ff6-8dc1-cf282e197dd0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.982928 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qppwq\" (UniqueName: \"kubernetes.io/projected/c1ae0702-73b7-45df-88fb-4e93ab7f6496-kube-api-access-qppwq\") pod \"nmstate-metrics-54757c584b-hkj82\" (UID: \"c1ae0702-73b7-45df-88fb-4e93ab7f6496\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hkj82" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.982958 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jht\" (UniqueName: \"kubernetes.io/projected/0cb7d58a-50bd-4ae2-9e83-5c689667726d-kube-api-access-k2jht\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.982982 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0cb7d58a-50bd-4ae2-9e83-5c689667726d-nmstate-lock\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.983011 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5svrn\" (UniqueName: \"kubernetes.io/projected/0133bab9-91e4-4ff6-8dc1-cf282e197dd0-kube-api-access-5svrn\") pod \"nmstate-webhook-8474b5b9d8-vt4ml\" (UID: \"0133bab9-91e4-4ff6-8dc1-cf282e197dd0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.983039 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0cb7d58a-50bd-4ae2-9e83-5c689667726d-ovs-socket\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:21:59 crc kubenswrapper[4697]: I0127 15:21:59.995947 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hkj82"] Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.047833 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn"] Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.048631 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.058889 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.059119 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-gnf7s" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.062038 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.068919 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn"] Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083540 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5svrn\" (UniqueName: \"kubernetes.io/projected/0133bab9-91e4-4ff6-8dc1-cf282e197dd0-kube-api-access-5svrn\") pod \"nmstate-webhook-8474b5b9d8-vt4ml\" (UID: \"0133bab9-91e4-4ff6-8dc1-cf282e197dd0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083590 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0cb7d58a-50bd-4ae2-9e83-5c689667726d-ovs-socket\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083620 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a10fdbc2-63e2-4b0b-afee-5ce01520801e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083653 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0cb7d58a-50bd-4ae2-9e83-5c689667726d-dbus-socket\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083670 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0133bab9-91e4-4ff6-8dc1-cf282e197dd0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-vt4ml\" (UID: \"0133bab9-91e4-4ff6-8dc1-cf282e197dd0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083700 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qppwq\" (UniqueName: \"kubernetes.io/projected/c1ae0702-73b7-45df-88fb-4e93ab7f6496-kube-api-access-qppwq\") pod \"nmstate-metrics-54757c584b-hkj82\" (UID: \"c1ae0702-73b7-45df-88fb-4e93ab7f6496\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hkj82" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083716 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs5fg\" (UniqueName: \"kubernetes.io/projected/a10fdbc2-63e2-4b0b-afee-5ce01520801e-kube-api-access-fs5fg\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083737 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2jht\" (UniqueName: \"kubernetes.io/projected/0cb7d58a-50bd-4ae2-9e83-5c689667726d-kube-api-access-k2jht\") pod 
\"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083757 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0cb7d58a-50bd-4ae2-9e83-5c689667726d-nmstate-lock\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083771 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a10fdbc2-63e2-4b0b-afee-5ce01520801e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.083836 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0cb7d58a-50bd-4ae2-9e83-5c689667726d-ovs-socket\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:00 crc kubenswrapper[4697]: E0127 15:22:00.083912 4697 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 15:22:00 crc kubenswrapper[4697]: E0127 15:22:00.083958 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0133bab9-91e4-4ff6-8dc1-cf282e197dd0-tls-key-pair podName:0133bab9-91e4-4ff6-8dc1-cf282e197dd0 nodeName:}" failed. No retries permitted until 2026-01-27 15:22:00.583939642 +0000 UTC m=+816.756339423 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/0133bab9-91e4-4ff6-8dc1-cf282e197dd0-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-vt4ml" (UID: "0133bab9-91e4-4ff6-8dc1-cf282e197dd0") : secret "openshift-nmstate-webhook" not found Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.084129 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0cb7d58a-50bd-4ae2-9e83-5c689667726d-nmstate-lock\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.084398 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0cb7d58a-50bd-4ae2-9e83-5c689667726d-dbus-socket\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.104148 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qppwq\" (UniqueName: \"kubernetes.io/projected/c1ae0702-73b7-45df-88fb-4e93ab7f6496-kube-api-access-qppwq\") pod \"nmstate-metrics-54757c584b-hkj82\" (UID: \"c1ae0702-73b7-45df-88fb-4e93ab7f6496\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hkj82" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.104729 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5svrn\" (UniqueName: \"kubernetes.io/projected/0133bab9-91e4-4ff6-8dc1-cf282e197dd0-kube-api-access-5svrn\") pod \"nmstate-webhook-8474b5b9d8-vt4ml\" (UID: \"0133bab9-91e4-4ff6-8dc1-cf282e197dd0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.106406 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k2jht\" (UniqueName: \"kubernetes.io/projected/0cb7d58a-50bd-4ae2-9e83-5c689667726d-kube-api-access-k2jht\") pod \"nmstate-handler-nlwcd\" (UID: \"0cb7d58a-50bd-4ae2-9e83-5c689667726d\") " pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.184931 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs5fg\" (UniqueName: \"kubernetes.io/projected/a10fdbc2-63e2-4b0b-afee-5ce01520801e-kube-api-access-fs5fg\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.184995 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a10fdbc2-63e2-4b0b-afee-5ce01520801e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.185041 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a10fdbc2-63e2-4b0b-afee-5ce01520801e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: E0127 15:22:00.185165 4697 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 15:22:00 crc kubenswrapper[4697]: E0127 15:22:00.185209 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a10fdbc2-63e2-4b0b-afee-5ce01520801e-plugin-serving-cert podName:a10fdbc2-63e2-4b0b-afee-5ce01520801e nodeName:}" failed. 
No retries permitted until 2026-01-27 15:22:00.685193508 +0000 UTC m=+816.857593289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a10fdbc2-63e2-4b0b-afee-5ce01520801e-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-7brtn" (UID: "a10fdbc2-63e2-4b0b-afee-5ce01520801e") : secret "plugin-serving-cert" not found Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.186186 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a10fdbc2-63e2-4b0b-afee-5ce01520801e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.204339 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs5fg\" (UniqueName: \"kubernetes.io/projected/a10fdbc2-63e2-4b0b-afee-5ce01520801e-kube-api-access-fs5fg\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.233056 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hkj82" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.276429 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-cf8cf9ff-6c6dz"] Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.277649 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.280483 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.304202 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cf8cf9ff-6c6dz"] Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.388601 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21d93bbb-f406-4af2-8717-65834ef8b7c7-console-oauth-config\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.388635 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-console-config\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.388675 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-trusted-ca-bundle\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.388985 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-service-ca\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.389081 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j4pw\" (UniqueName: \"kubernetes.io/projected/21d93bbb-f406-4af2-8717-65834ef8b7c7-kube-api-access-6j4pw\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.389124 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21d93bbb-f406-4af2-8717-65834ef8b7c7-console-serving-cert\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.389148 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-oauth-serving-cert\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.490293 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21d93bbb-f406-4af2-8717-65834ef8b7c7-console-oauth-config\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.490330 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-console-config\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.490349 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-trusted-ca-bundle\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.490373 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-service-ca\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.490437 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j4pw\" (UniqueName: \"kubernetes.io/projected/21d93bbb-f406-4af2-8717-65834ef8b7c7-kube-api-access-6j4pw\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.490462 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21d93bbb-f406-4af2-8717-65834ef8b7c7-console-serving-cert\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.490478 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-oauth-serving-cert\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.491548 4697 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-oauth-serving-cert\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.492695 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-service-ca\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.493160 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-console-config\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.493268 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21d93bbb-f406-4af2-8717-65834ef8b7c7-trusted-ca-bundle\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.500487 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21d93bbb-f406-4af2-8717-65834ef8b7c7-console-oauth-config\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.500495 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/21d93bbb-f406-4af2-8717-65834ef8b7c7-console-serving-cert\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.512548 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hkj82"] Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.513121 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j4pw\" (UniqueName: \"kubernetes.io/projected/21d93bbb-f406-4af2-8717-65834ef8b7c7-kube-api-access-6j4pw\") pod \"console-cf8cf9ff-6c6dz\" (UID: \"21d93bbb-f406-4af2-8717-65834ef8b7c7\") " pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: W0127 15:22:00.517905 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1ae0702_73b7_45df_88fb_4e93ab7f6496.slice/crio-f0902e273962c4e0674d7f4fcb30496559241573710dc5f88507a9ed21edb1c7 WatchSource:0}: Error finding container f0902e273962c4e0674d7f4fcb30496559241573710dc5f88507a9ed21edb1c7: Status 404 returned error can't find the container with id f0902e273962c4e0674d7f4fcb30496559241573710dc5f88507a9ed21edb1c7 Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.540186 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nlwcd" event={"ID":"0cb7d58a-50bd-4ae2-9e83-5c689667726d","Type":"ContainerStarted","Data":"cdee449ed7a7da29055fc5d8bae400f8b075907bd5163094d0504f3a6798f7d6"} Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.541164 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hkj82" event={"ID":"c1ae0702-73b7-45df-88fb-4e93ab7f6496","Type":"ContainerStarted","Data":"f0902e273962c4e0674d7f4fcb30496559241573710dc5f88507a9ed21edb1c7"} Jan 27 15:22:00 crc 
kubenswrapper[4697]: I0127 15:22:00.591447 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0133bab9-91e4-4ff6-8dc1-cf282e197dd0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-vt4ml\" (UID: \"0133bab9-91e4-4ff6-8dc1-cf282e197dd0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.594662 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0133bab9-91e4-4ff6-8dc1-cf282e197dd0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-vt4ml\" (UID: \"0133bab9-91e4-4ff6-8dc1-cf282e197dd0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.608760 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.693339 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a10fdbc2-63e2-4b0b-afee-5ce01520801e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.696445 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a10fdbc2-63e2-4b0b-afee-5ce01520801e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7brtn\" (UID: \"a10fdbc2-63e2-4b0b-afee-5ce01520801e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.851754 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.868290 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cf8cf9ff-6c6dz"] Jan 27 15:22:00 crc kubenswrapper[4697]: W0127 15:22:00.877241 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d93bbb_f406_4af2_8717_65834ef8b7c7.slice/crio-5b4a153bbd4267feb21f46ef54539feb8bed4279b21e875c1e10ca8988f4f1cf WatchSource:0}: Error finding container 5b4a153bbd4267feb21f46ef54539feb8bed4279b21e875c1e10ca8988f4f1cf: Status 404 returned error can't find the container with id 5b4a153bbd4267feb21f46ef54539feb8bed4279b21e875c1e10ca8988f4f1cf Jan 27 15:22:00 crc kubenswrapper[4697]: I0127 15:22:00.969111 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" Jan 27 15:22:01 crc kubenswrapper[4697]: I0127 15:22:01.072014 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml"] Jan 27 15:22:01 crc kubenswrapper[4697]: W0127 15:22:01.078677 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0133bab9_91e4_4ff6_8dc1_cf282e197dd0.slice/crio-984c36ca9f8bb3ca20dc8f8dc4badf1b318134424f78f394d309f1a385b90003 WatchSource:0}: Error finding container 984c36ca9f8bb3ca20dc8f8dc4badf1b318134424f78f394d309f1a385b90003: Status 404 returned error can't find the container with id 984c36ca9f8bb3ca20dc8f8dc4badf1b318134424f78f394d309f1a385b90003 Jan 27 15:22:01 crc kubenswrapper[4697]: I0127 15:22:01.177447 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn"] Jan 27 15:22:01 crc kubenswrapper[4697]: W0127 15:22:01.183548 4697 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda10fdbc2_63e2_4b0b_afee_5ce01520801e.slice/crio-8c791fe905c1667a9d4315e0efd6ca23a806fa64187ccfb347623f199cad2470 WatchSource:0}: Error finding container 8c791fe905c1667a9d4315e0efd6ca23a806fa64187ccfb347623f199cad2470: Status 404 returned error can't find the container with id 8c791fe905c1667a9d4315e0efd6ca23a806fa64187ccfb347623f199cad2470 Jan 27 15:22:01 crc kubenswrapper[4697]: I0127 15:22:01.550889 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" event={"ID":"0133bab9-91e4-4ff6-8dc1-cf282e197dd0","Type":"ContainerStarted","Data":"984c36ca9f8bb3ca20dc8f8dc4badf1b318134424f78f394d309f1a385b90003"} Jan 27 15:22:01 crc kubenswrapper[4697]: I0127 15:22:01.553662 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cf8cf9ff-6c6dz" event={"ID":"21d93bbb-f406-4af2-8717-65834ef8b7c7","Type":"ContainerStarted","Data":"d837b221c0d4d93886753a723551f5305c2260cda304cbfb6dfb3f3114f9e3df"} Jan 27 15:22:01 crc kubenswrapper[4697]: I0127 15:22:01.553741 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cf8cf9ff-6c6dz" event={"ID":"21d93bbb-f406-4af2-8717-65834ef8b7c7","Type":"ContainerStarted","Data":"5b4a153bbd4267feb21f46ef54539feb8bed4279b21e875c1e10ca8988f4f1cf"} Jan 27 15:22:01 crc kubenswrapper[4697]: I0127 15:22:01.556035 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" event={"ID":"a10fdbc2-63e2-4b0b-afee-5ce01520801e","Type":"ContainerStarted","Data":"8c791fe905c1667a9d4315e0efd6ca23a806fa64187ccfb347623f199cad2470"} Jan 27 15:22:01 crc kubenswrapper[4697]: I0127 15:22:01.591282 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-cf8cf9ff-6c6dz" podStartSLOduration=1.591254326 podStartE2EDuration="1.591254326s" 
podCreationTimestamp="2026-01-27 15:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:22:01.582571663 +0000 UTC m=+817.754971484" watchObservedRunningTime="2026-01-27 15:22:01.591254326 +0000 UTC m=+817.763654137" Jan 27 15:22:04 crc kubenswrapper[4697]: I0127 15:22:04.584771 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hkj82" event={"ID":"c1ae0702-73b7-45df-88fb-4e93ab7f6496","Type":"ContainerStarted","Data":"7bdb44359ee2940e814649fa3a273f62f460e6e16fe5e8397092d88e0d7c4ad5"} Jan 27 15:22:04 crc kubenswrapper[4697]: I0127 15:22:04.587077 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" event={"ID":"a10fdbc2-63e2-4b0b-afee-5ce01520801e","Type":"ContainerStarted","Data":"798f1d1c41501f2e00df5ac912e0cc550adeccfedb380666a607c5411087937e"} Jan 27 15:22:04 crc kubenswrapper[4697]: I0127 15:22:04.589239 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nlwcd" event={"ID":"0cb7d58a-50bd-4ae2-9e83-5c689667726d","Type":"ContainerStarted","Data":"0fc543a8e626c8b6ebdee89856108b1c22021c3423cd2a3f49c41af996b2131f"} Jan 27 15:22:04 crc kubenswrapper[4697]: I0127 15:22:04.589366 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:04 crc kubenswrapper[4697]: I0127 15:22:04.590300 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" event={"ID":"0133bab9-91e4-4ff6-8dc1-cf282e197dd0","Type":"ContainerStarted","Data":"29a89ec2a937135f5a6f09d6eed64c8ef367cbdc421cdc684ba5a07d974f8f58"} Jan 27 15:22:04 crc kubenswrapper[4697]: I0127 15:22:04.590430 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 
15:22:04 crc kubenswrapper[4697]: I0127 15:22:04.635224 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" podStartSLOduration=2.625774808 podStartE2EDuration="5.635199568s" podCreationTimestamp="2026-01-27 15:21:59 +0000 UTC" firstStartedPulling="2026-01-27 15:22:01.082048025 +0000 UTC m=+817.254447806" lastFinishedPulling="2026-01-27 15:22:04.091472745 +0000 UTC m=+820.263872566" observedRunningTime="2026-01-27 15:22:04.630024642 +0000 UTC m=+820.802424443" watchObservedRunningTime="2026-01-27 15:22:04.635199568 +0000 UTC m=+820.807599359" Jan 27 15:22:04 crc kubenswrapper[4697]: I0127 15:22:04.651190 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-nlwcd" podStartSLOduration=1.950898149 podStartE2EDuration="5.651173959s" podCreationTimestamp="2026-01-27 15:21:59 +0000 UTC" firstStartedPulling="2026-01-27 15:22:00.349126016 +0000 UTC m=+816.521525797" lastFinishedPulling="2026-01-27 15:22:04.049401826 +0000 UTC m=+820.221801607" observedRunningTime="2026-01-27 15:22:04.645181812 +0000 UTC m=+820.817581603" watchObservedRunningTime="2026-01-27 15:22:04.651173959 +0000 UTC m=+820.823573760" Jan 27 15:22:04 crc kubenswrapper[4697]: I0127 15:22:04.664827 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7brtn" podStartSLOduration=1.8169212319999999 podStartE2EDuration="4.664808192s" podCreationTimestamp="2026-01-27 15:22:00 +0000 UTC" firstStartedPulling="2026-01-27 15:22:01.186064829 +0000 UTC m=+817.358464610" lastFinishedPulling="2026-01-27 15:22:04.033951779 +0000 UTC m=+820.206351570" observedRunningTime="2026-01-27 15:22:04.662158927 +0000 UTC m=+820.834558708" watchObservedRunningTime="2026-01-27 15:22:04.664808192 +0000 UTC m=+820.837207993" Jan 27 15:22:07 crc kubenswrapper[4697]: I0127 15:22:07.612707 4697 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hkj82" event={"ID":"c1ae0702-73b7-45df-88fb-4e93ab7f6496","Type":"ContainerStarted","Data":"41870cbb17c075ddd0a711e3513cab3eeb409e201d9e1083240fefe5bd6d17e2"} Jan 27 15:22:10 crc kubenswrapper[4697]: I0127 15:22:10.308630 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-nlwcd" Jan 27 15:22:10 crc kubenswrapper[4697]: I0127 15:22:10.323914 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-hkj82" podStartSLOduration=5.105384036 podStartE2EDuration="11.323894845s" podCreationTimestamp="2026-01-27 15:21:59 +0000 UTC" firstStartedPulling="2026-01-27 15:22:00.520999189 +0000 UTC m=+816.693398970" lastFinishedPulling="2026-01-27 15:22:06.739509988 +0000 UTC m=+822.911909779" observedRunningTime="2026-01-27 15:22:07.646846342 +0000 UTC m=+823.819246143" watchObservedRunningTime="2026-01-27 15:22:10.323894845 +0000 UTC m=+826.496294626" Jan 27 15:22:10 crc kubenswrapper[4697]: I0127 15:22:10.609599 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:10 crc kubenswrapper[4697]: I0127 15:22:10.609708 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:10 crc kubenswrapper[4697]: I0127 15:22:10.618196 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:10 crc kubenswrapper[4697]: I0127 15:22:10.648439 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-cf8cf9ff-6c6dz" Jan 27 15:22:10 crc kubenswrapper[4697]: I0127 15:22:10.754532 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wjd95"] Jan 27 15:22:20 crc kubenswrapper[4697]: I0127 
15:22:20.858440 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-vt4ml" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.369691 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz"] Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.371366 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.374523 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.392474 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz"] Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.470497 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zl9j\" (UniqueName: \"kubernetes.io/projected/a5c419f2-da90-4ed6-8155-03cba6840bc7-kube-api-access-8zl9j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.470628 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: 
I0127 15:22:35.470683 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.574748 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zl9j\" (UniqueName: \"kubernetes.io/projected/a5c419f2-da90-4ed6-8155-03cba6840bc7-kube-api-access-8zl9j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.574862 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.574904 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.575506 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.575607 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.600491 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zl9j\" (UniqueName: \"kubernetes.io/projected/a5c419f2-da90-4ed6-8155-03cba6840bc7-kube-api-access-8zl9j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.695976 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.807009 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wjd95" podUID="0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" containerName="console" containerID="cri-o://a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd" gracePeriod=15 Jan 27 15:22:35 crc kubenswrapper[4697]: I0127 15:22:35.902477 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz"] Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.087941 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wjd95_0f95124d-8a5d-4a0d-b4cd-906d0341a6a2/console/0.log" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.088254 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.183533 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-serving-cert\") pod \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.183618 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-oauth-config\") pod \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.183673 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-oauth-serving-cert\") pod \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.183730 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-config\") pod \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.183778 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh84z\" (UniqueName: \"kubernetes.io/projected/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-kube-api-access-hh84z\") pod \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.183884 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-trusted-ca-bundle\") pod \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.183903 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-service-ca\") pod \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\" (UID: \"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2\") " Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.184815 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-config" (OuterVolumeSpecName: "console-config") pod "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" (UID: "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.184950 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-service-ca" (OuterVolumeSpecName: "service-ca") pod "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" (UID: "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.185233 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" (UID: "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.185477 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" (UID: "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.189778 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" (UID: "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.189955 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-kube-api-access-hh84z" (OuterVolumeSpecName: "kube-api-access-hh84z") pod "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" (UID: "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2"). InnerVolumeSpecName "kube-api-access-hh84z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.190020 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" (UID: "0f95124d-8a5d-4a0d-b4cd-906d0341a6a2"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.285175 4697 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.285223 4697 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.285243 4697 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.285262 4697 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.285281 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh84z\" (UniqueName: \"kubernetes.io/projected/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-kube-api-access-hh84z\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.285301 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.285319 4697 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:36 crc 
kubenswrapper[4697]: I0127 15:22:36.835177 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wjd95_0f95124d-8a5d-4a0d-b4cd-906d0341a6a2/console/0.log" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.835235 4697 generic.go:334] "Generic (PLEG): container finished" podID="0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" containerID="a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd" exitCode=2 Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.835347 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wjd95" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.835748 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wjd95" event={"ID":"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2","Type":"ContainerDied","Data":"a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd"} Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.835779 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wjd95" event={"ID":"0f95124d-8a5d-4a0d-b4cd-906d0341a6a2","Type":"ContainerDied","Data":"29d5c9787502e9669abbe96eaf4ab7794727491f9681e13bbc4697ee3d0371ce"} Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.835814 4697 scope.go:117] "RemoveContainer" containerID="a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.841519 4697 generic.go:334] "Generic (PLEG): container finished" podID="a5c419f2-da90-4ed6-8155-03cba6840bc7" containerID="206b4eab75b57d80537b17fc4a7a4ce71dd2c395c2662b8e7ca597b73cde975d" exitCode=0 Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.841556 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" 
event={"ID":"a5c419f2-da90-4ed6-8155-03cba6840bc7","Type":"ContainerDied","Data":"206b4eab75b57d80537b17fc4a7a4ce71dd2c395c2662b8e7ca597b73cde975d"} Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.841580 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" event={"ID":"a5c419f2-da90-4ed6-8155-03cba6840bc7","Type":"ContainerStarted","Data":"9a2d40f3aa9938324b3d72a085dbc60fbea63b1716e044282e37ccd619dcb5aa"} Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.869879 4697 scope.go:117] "RemoveContainer" containerID="a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd" Jan 27 15:22:36 crc kubenswrapper[4697]: E0127 15:22:36.870497 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd\": container with ID starting with a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd not found: ID does not exist" containerID="a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.870682 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd"} err="failed to get container status \"a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd\": rpc error: code = NotFound desc = could not find container \"a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd\": container with ID starting with a25b1887a26bd7bc33aea2fca0f7d247f941b26bafd9a4e6afe65513d5c220fd not found: ID does not exist" Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.878994 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wjd95"] Jan 27 15:22:36 crc kubenswrapper[4697]: I0127 15:22:36.886925 4697 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wjd95"] Jan 27 15:22:38 crc kubenswrapper[4697]: I0127 15:22:38.575305 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" path="/var/lib/kubelet/pods/0f95124d-8a5d-4a0d-b4cd-906d0341a6a2/volumes" Jan 27 15:22:38 crc kubenswrapper[4697]: I0127 15:22:38.860732 4697 generic.go:334] "Generic (PLEG): container finished" podID="a5c419f2-da90-4ed6-8155-03cba6840bc7" containerID="5b4a9b988909b38b1258cbf523860528366f144a2b1655dd67171a1ffe9e0cea" exitCode=0 Jan 27 15:22:38 crc kubenswrapper[4697]: I0127 15:22:38.860814 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" event={"ID":"a5c419f2-da90-4ed6-8155-03cba6840bc7","Type":"ContainerDied","Data":"5b4a9b988909b38b1258cbf523860528366f144a2b1655dd67171a1ffe9e0cea"} Jan 27 15:22:39 crc kubenswrapper[4697]: I0127 15:22:39.873607 4697 generic.go:334] "Generic (PLEG): container finished" podID="a5c419f2-da90-4ed6-8155-03cba6840bc7" containerID="f6860cc74b01af4a3034ede7f2f1c8a12ee4a09895087b5e15ff7b907c6d8447" exitCode=0 Jan 27 15:22:39 crc kubenswrapper[4697]: I0127 15:22:39.873873 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" event={"ID":"a5c419f2-da90-4ed6-8155-03cba6840bc7","Type":"ContainerDied","Data":"f6860cc74b01af4a3034ede7f2f1c8a12ee4a09895087b5e15ff7b907c6d8447"} Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.142484 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.263188 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zl9j\" (UniqueName: \"kubernetes.io/projected/a5c419f2-da90-4ed6-8155-03cba6840bc7-kube-api-access-8zl9j\") pod \"a5c419f2-da90-4ed6-8155-03cba6840bc7\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.263328 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-util\") pod \"a5c419f2-da90-4ed6-8155-03cba6840bc7\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.263471 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-bundle\") pod \"a5c419f2-da90-4ed6-8155-03cba6840bc7\" (UID: \"a5c419f2-da90-4ed6-8155-03cba6840bc7\") " Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.265042 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-bundle" (OuterVolumeSpecName: "bundle") pod "a5c419f2-da90-4ed6-8155-03cba6840bc7" (UID: "a5c419f2-da90-4ed6-8155-03cba6840bc7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.272074 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5c419f2-da90-4ed6-8155-03cba6840bc7-kube-api-access-8zl9j" (OuterVolumeSpecName: "kube-api-access-8zl9j") pod "a5c419f2-da90-4ed6-8155-03cba6840bc7" (UID: "a5c419f2-da90-4ed6-8155-03cba6840bc7"). InnerVolumeSpecName "kube-api-access-8zl9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.365198 4697 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.365231 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zl9j\" (UniqueName: \"kubernetes.io/projected/a5c419f2-da90-4ed6-8155-03cba6840bc7-kube-api-access-8zl9j\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.601171 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-util" (OuterVolumeSpecName: "util") pod "a5c419f2-da90-4ed6-8155-03cba6840bc7" (UID: "a5c419f2-da90-4ed6-8155-03cba6840bc7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.669658 4697 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5c419f2-da90-4ed6-8155-03cba6840bc7-util\") on node \"crc\" DevicePath \"\"" Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.894336 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" event={"ID":"a5c419f2-da90-4ed6-8155-03cba6840bc7","Type":"ContainerDied","Data":"9a2d40f3aa9938324b3d72a085dbc60fbea63b1716e044282e37ccd619dcb5aa"} Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.894426 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a2d40f3aa9938324b3d72a085dbc60fbea63b1716e044282e37ccd619dcb5aa" Jan 27 15:22:41 crc kubenswrapper[4697]: I0127 15:22:41.894518 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.392687 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7"] Jan 27 15:22:50 crc kubenswrapper[4697]: E0127 15:22:50.393469 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" containerName="console" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.393486 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" containerName="console" Jan 27 15:22:50 crc kubenswrapper[4697]: E0127 15:22:50.393499 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5c419f2-da90-4ed6-8155-03cba6840bc7" containerName="pull" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.393508 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c419f2-da90-4ed6-8155-03cba6840bc7" containerName="pull" Jan 27 15:22:50 crc kubenswrapper[4697]: E0127 15:22:50.393528 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5c419f2-da90-4ed6-8155-03cba6840bc7" containerName="extract" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.393534 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c419f2-da90-4ed6-8155-03cba6840bc7" containerName="extract" Jan 27 15:22:50 crc kubenswrapper[4697]: E0127 15:22:50.393543 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5c419f2-da90-4ed6-8155-03cba6840bc7" containerName="util" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.393549 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c419f2-da90-4ed6-8155-03cba6840bc7" containerName="util" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.393649 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5c419f2-da90-4ed6-8155-03cba6840bc7" 
containerName="extract" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.393669 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f95124d-8a5d-4a0d-b4cd-906d0341a6a2" containerName="console" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.394044 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.397114 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.397298 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qc7xc" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.397402 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.399739 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.401505 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.421951 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7"] Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.515330 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7nq\" (UniqueName: \"kubernetes.io/projected/fc55dd19-5186-4ee0-b54d-0fec0c93f30a-kube-api-access-rc7nq\") pod \"metallb-operator-controller-manager-976dcb485-6tnr7\" (UID: \"fc55dd19-5186-4ee0-b54d-0fec0c93f30a\") " 
pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.515382 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc55dd19-5186-4ee0-b54d-0fec0c93f30a-webhook-cert\") pod \"metallb-operator-controller-manager-976dcb485-6tnr7\" (UID: \"fc55dd19-5186-4ee0-b54d-0fec0c93f30a\") " pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.515448 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc55dd19-5186-4ee0-b54d-0fec0c93f30a-apiservice-cert\") pod \"metallb-operator-controller-manager-976dcb485-6tnr7\" (UID: \"fc55dd19-5186-4ee0-b54d-0fec0c93f30a\") " pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.616431 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc55dd19-5186-4ee0-b54d-0fec0c93f30a-apiservice-cert\") pod \"metallb-operator-controller-manager-976dcb485-6tnr7\" (UID: \"fc55dd19-5186-4ee0-b54d-0fec0c93f30a\") " pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.616708 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7nq\" (UniqueName: \"kubernetes.io/projected/fc55dd19-5186-4ee0-b54d-0fec0c93f30a-kube-api-access-rc7nq\") pod \"metallb-operator-controller-manager-976dcb485-6tnr7\" (UID: \"fc55dd19-5186-4ee0-b54d-0fec0c93f30a\") " pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.617091 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc55dd19-5186-4ee0-b54d-0fec0c93f30a-webhook-cert\") pod \"metallb-operator-controller-manager-976dcb485-6tnr7\" (UID: \"fc55dd19-5186-4ee0-b54d-0fec0c93f30a\") " pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.622185 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc55dd19-5186-4ee0-b54d-0fec0c93f30a-webhook-cert\") pod \"metallb-operator-controller-manager-976dcb485-6tnr7\" (UID: \"fc55dd19-5186-4ee0-b54d-0fec0c93f30a\") " pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.622714 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc55dd19-5186-4ee0-b54d-0fec0c93f30a-apiservice-cert\") pod \"metallb-operator-controller-manager-976dcb485-6tnr7\" (UID: \"fc55dd19-5186-4ee0-b54d-0fec0c93f30a\") " pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.656019 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7nq\" (UniqueName: \"kubernetes.io/projected/fc55dd19-5186-4ee0-b54d-0fec0c93f30a-kube-api-access-rc7nq\") pod \"metallb-operator-controller-manager-976dcb485-6tnr7\" (UID: \"fc55dd19-5186-4ee0-b54d-0fec0c93f30a\") " pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.710837 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.777563 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l"] Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.778404 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.785185 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.785343 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.785444 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xgvc7" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.799910 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l"] Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.863071 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tr7h\" (UniqueName: \"kubernetes.io/projected/4779f8a7-b446-4128-8800-0b6420fda6d8-kube-api-access-5tr7h\") pod \"metallb-operator-webhook-server-5476f886c6-mrv5l\" (UID: \"4779f8a7-b446-4128-8800-0b6420fda6d8\") " pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.863323 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4779f8a7-b446-4128-8800-0b6420fda6d8-apiservice-cert\") pod 
\"metallb-operator-webhook-server-5476f886c6-mrv5l\" (UID: \"4779f8a7-b446-4128-8800-0b6420fda6d8\") " pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.863440 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4779f8a7-b446-4128-8800-0b6420fda6d8-webhook-cert\") pod \"metallb-operator-webhook-server-5476f886c6-mrv5l\" (UID: \"4779f8a7-b446-4128-8800-0b6420fda6d8\") " pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.975628 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tr7h\" (UniqueName: \"kubernetes.io/projected/4779f8a7-b446-4128-8800-0b6420fda6d8-kube-api-access-5tr7h\") pod \"metallb-operator-webhook-server-5476f886c6-mrv5l\" (UID: \"4779f8a7-b446-4128-8800-0b6420fda6d8\") " pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.976694 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4779f8a7-b446-4128-8800-0b6420fda6d8-apiservice-cert\") pod \"metallb-operator-webhook-server-5476f886c6-mrv5l\" (UID: \"4779f8a7-b446-4128-8800-0b6420fda6d8\") " pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.976871 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4779f8a7-b446-4128-8800-0b6420fda6d8-webhook-cert\") pod \"metallb-operator-webhook-server-5476f886c6-mrv5l\" (UID: \"4779f8a7-b446-4128-8800-0b6420fda6d8\") " pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.986602 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4779f8a7-b446-4128-8800-0b6420fda6d8-webhook-cert\") pod \"metallb-operator-webhook-server-5476f886c6-mrv5l\" (UID: \"4779f8a7-b446-4128-8800-0b6420fda6d8\") " pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.987126 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4779f8a7-b446-4128-8800-0b6420fda6d8-apiservice-cert\") pod \"metallb-operator-webhook-server-5476f886c6-mrv5l\" (UID: \"4779f8a7-b446-4128-8800-0b6420fda6d8\") " pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:50 crc kubenswrapper[4697]: I0127 15:22:50.998606 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tr7h\" (UniqueName: \"kubernetes.io/projected/4779f8a7-b446-4128-8800-0b6420fda6d8-kube-api-access-5tr7h\") pod \"metallb-operator-webhook-server-5476f886c6-mrv5l\" (UID: \"4779f8a7-b446-4128-8800-0b6420fda6d8\") " pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:51 crc kubenswrapper[4697]: I0127 15:22:51.097078 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:51 crc kubenswrapper[4697]: I0127 15:22:51.353478 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7"] Jan 27 15:22:51 crc kubenswrapper[4697]: W0127 15:22:51.364100 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc55dd19_5186_4ee0_b54d_0fec0c93f30a.slice/crio-ccb487bb291c69ae5d92759ea01d851906214361d51ea8638eaeaaf26e5939a8 WatchSource:0}: Error finding container ccb487bb291c69ae5d92759ea01d851906214361d51ea8638eaeaaf26e5939a8: Status 404 returned error can't find the container with id ccb487bb291c69ae5d92759ea01d851906214361d51ea8638eaeaaf26e5939a8 Jan 27 15:22:51 crc kubenswrapper[4697]: I0127 15:22:51.394620 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l"] Jan 27 15:22:51 crc kubenswrapper[4697]: I0127 15:22:51.950612 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" event={"ID":"4779f8a7-b446-4128-8800-0b6420fda6d8","Type":"ContainerStarted","Data":"a727f314f4673b4ceb5b4c0c6c015424081bd11875ef406a840852d417de19fc"} Jan 27 15:22:51 crc kubenswrapper[4697]: I0127 15:22:51.951566 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" event={"ID":"fc55dd19-5186-4ee0-b54d-0fec0c93f30a","Type":"ContainerStarted","Data":"ccb487bb291c69ae5d92759ea01d851906214361d51ea8638eaeaaf26e5939a8"} Jan 27 15:22:54 crc kubenswrapper[4697]: I0127 15:22:54.980145 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" 
event={"ID":"fc55dd19-5186-4ee0-b54d-0fec0c93f30a","Type":"ContainerStarted","Data":"9fffe3c0c896126636ca3059dfc8125a99deaf6f9b2346b4630203d41f1aca51"} Jan 27 15:22:54 crc kubenswrapper[4697]: I0127 15:22:54.983701 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:22:55 crc kubenswrapper[4697]: I0127 15:22:55.010275 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" podStartSLOduration=1.674933878 podStartE2EDuration="5.010253865s" podCreationTimestamp="2026-01-27 15:22:50 +0000 UTC" firstStartedPulling="2026-01-27 15:22:51.366121208 +0000 UTC m=+867.538520989" lastFinishedPulling="2026-01-27 15:22:54.701441195 +0000 UTC m=+870.873840976" observedRunningTime="2026-01-27 15:22:55.000729462 +0000 UTC m=+871.173129253" watchObservedRunningTime="2026-01-27 15:22:55.010253865 +0000 UTC m=+871.182653646" Jan 27 15:22:58 crc kubenswrapper[4697]: I0127 15:22:58.003472 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" event={"ID":"4779f8a7-b446-4128-8800-0b6420fda6d8","Type":"ContainerStarted","Data":"a141c15f8c4dac6639e5bf3e191b08739654df72593088bf69eaa51c3a0f8316"} Jan 27 15:22:58 crc kubenswrapper[4697]: I0127 15:22:58.004068 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:22:58 crc kubenswrapper[4697]: I0127 15:22:58.027163 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" podStartSLOduration=2.539769703 podStartE2EDuration="8.027147947s" podCreationTimestamp="2026-01-27 15:22:50 +0000 UTC" firstStartedPulling="2026-01-27 15:22:51.39852389 +0000 UTC m=+867.570923671" lastFinishedPulling="2026-01-27 
15:22:56.885902134 +0000 UTC m=+873.058301915" observedRunningTime="2026-01-27 15:22:58.026146493 +0000 UTC m=+874.198546304" watchObservedRunningTime="2026-01-27 15:22:58.027147947 +0000 UTC m=+874.199547728" Jan 27 15:23:11 crc kubenswrapper[4697]: I0127 15:23:11.101543 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" Jan 27 15:23:25 crc kubenswrapper[4697]: I0127 15:23:25.109420 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:23:25 crc kubenswrapper[4697]: I0127 15:23:25.110034 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.571266 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w8lbx"] Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.576163 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.610870 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8lbx"] Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.624639 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-utilities\") pod \"redhat-marketplace-w8lbx\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.624708 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcm2s\" (UniqueName: \"kubernetes.io/projected/89adb880-08aa-412e-83a0-e9352901785f-kube-api-access-vcm2s\") pod \"redhat-marketplace-w8lbx\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.625420 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-catalog-content\") pod \"redhat-marketplace-w8lbx\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.726327 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-utilities\") pod \"redhat-marketplace-w8lbx\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.726670 4697 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-vcm2s\" (UniqueName: \"kubernetes.io/projected/89adb880-08aa-412e-83a0-e9352901785f-kube-api-access-vcm2s\") pod \"redhat-marketplace-w8lbx\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.726878 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-catalog-content\") pod \"redhat-marketplace-w8lbx\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.726907 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-utilities\") pod \"redhat-marketplace-w8lbx\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.727339 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-catalog-content\") pod \"redhat-marketplace-w8lbx\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.756195 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcm2s\" (UniqueName: \"kubernetes.io/projected/89adb880-08aa-412e-83a0-e9352901785f-kube-api-access-vcm2s\") pod \"redhat-marketplace-w8lbx\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:27 crc kubenswrapper[4697]: I0127 15:23:27.914002 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:28 crc kubenswrapper[4697]: I0127 15:23:28.142979 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8lbx"] Jan 27 15:23:28 crc kubenswrapper[4697]: I0127 15:23:28.164408 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8lbx" event={"ID":"89adb880-08aa-412e-83a0-e9352901785f","Type":"ContainerStarted","Data":"99d087aa8fd817ef8c3334fe2259253ec6d535e0127e539343af7853a7003429"} Jan 27 15:23:29 crc kubenswrapper[4697]: I0127 15:23:29.174298 4697 generic.go:334] "Generic (PLEG): container finished" podID="89adb880-08aa-412e-83a0-e9352901785f" containerID="c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d" exitCode=0 Jan 27 15:23:29 crc kubenswrapper[4697]: I0127 15:23:29.174342 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8lbx" event={"ID":"89adb880-08aa-412e-83a0-e9352901785f","Type":"ContainerDied","Data":"c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d"} Jan 27 15:23:30 crc kubenswrapper[4697]: I0127 15:23:30.194675 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8lbx" event={"ID":"89adb880-08aa-412e-83a0-e9352901785f","Type":"ContainerStarted","Data":"7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551"} Jan 27 15:23:30 crc kubenswrapper[4697]: I0127 15:23:30.714583 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-976dcb485-6tnr7" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.204003 4697 generic.go:334] "Generic (PLEG): container finished" podID="89adb880-08aa-412e-83a0-e9352901785f" containerID="7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551" exitCode=0 Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 
15:23:31.204141 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8lbx" event={"ID":"89adb880-08aa-412e-83a0-e9352901785f","Type":"ContainerDied","Data":"7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551"} Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.508869 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-g49qd"] Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.521796 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh"] Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.522550 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.523033 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.524566 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.529735 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.529794 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fh299" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.530220 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.538679 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh"] Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.612019 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8stft"] 
Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.612882 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.614858 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.615434 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nq6fq" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.615540 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.617530 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.625076 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-shgkw"] Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.625951 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.627696 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.630795 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-shgkw"] Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718159 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-reloader\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718235 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7dtk\" (UniqueName: \"kubernetes.io/projected/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-kube-api-access-n7dtk\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718256 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-metrics-certs\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718273 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/18479ade-7486-4889-b313-79c6598cc773-metallb-excludel2\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 
15:23:31.718321 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-frr-conf\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718336 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8k7w\" (UniqueName: \"kubernetes.io/projected/18479ade-7486-4889-b313-79c6598cc773-kube-api-access-p8k7w\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718354 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-memberlist\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718411 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-metrics\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718426 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-frr-sockets\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718441 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-frr-startup\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718478 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-metrics-certs\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718515 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvbll\" (UniqueName: \"kubernetes.io/projected/cb6be63b-c3fd-4e21-a1b3-ffc11357a98f-kube-api-access-cvbll\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk5jh\" (UID: \"cb6be63b-c3fd-4e21-a1b3-ffc11357a98f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.718531 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb6be63b-c3fd-4e21-a1b3-ffc11357a98f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk5jh\" (UID: \"cb6be63b-c3fd-4e21-a1b3-ffc11357a98f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819109 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7dtk\" (UniqueName: \"kubernetes.io/projected/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-kube-api-access-n7dtk\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819156 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-4bnnm\" (UniqueName: \"kubernetes.io/projected/0ecbc291-e00b-42be-b1dc-fd53bcb5256a-kube-api-access-4bnnm\") pod \"controller-6968d8fdc4-shgkw\" (UID: \"0ecbc291-e00b-42be-b1dc-fd53bcb5256a\") " pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819177 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-metrics-certs\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819196 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/18479ade-7486-4889-b313-79c6598cc773-metallb-excludel2\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819214 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ecbc291-e00b-42be-b1dc-fd53bcb5256a-metrics-certs\") pod \"controller-6968d8fdc4-shgkw\" (UID: \"0ecbc291-e00b-42be-b1dc-fd53bcb5256a\") " pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819246 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-frr-conf\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819262 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8k7w\" (UniqueName: 
\"kubernetes.io/projected/18479ade-7486-4889-b313-79c6598cc773-kube-api-access-p8k7w\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819284 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-memberlist\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819300 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ecbc291-e00b-42be-b1dc-fd53bcb5256a-cert\") pod \"controller-6968d8fdc4-shgkw\" (UID: \"0ecbc291-e00b-42be-b1dc-fd53bcb5256a\") " pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819329 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-frr-sockets\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819345 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-metrics\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819361 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-frr-startup\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " 
pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: E0127 15:23:31.819342 4697 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819385 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-metrics-certs\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819410 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvbll\" (UniqueName: \"kubernetes.io/projected/cb6be63b-c3fd-4e21-a1b3-ffc11357a98f-kube-api-access-cvbll\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk5jh\" (UID: \"cb6be63b-c3fd-4e21-a1b3-ffc11357a98f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:31 crc kubenswrapper[4697]: E0127 15:23:31.819418 4697 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 15:23:31 crc kubenswrapper[4697]: E0127 15:23:31.819468 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-metrics-certs podName:18479ade-7486-4889-b313-79c6598cc773 nodeName:}" failed. No retries permitted until 2026-01-27 15:23:32.319415752 +0000 UTC m=+908.491815543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-metrics-certs") pod "speaker-8stft" (UID: "18479ade-7486-4889-b313-79c6598cc773") : secret "speaker-certs-secret" not found Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819502 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb6be63b-c3fd-4e21-a1b3-ffc11357a98f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk5jh\" (UID: \"cb6be63b-c3fd-4e21-a1b3-ffc11357a98f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819622 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-reloader\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819829 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-frr-conf\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819867 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-frr-sockets\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: E0127 15:23:31.819909 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-memberlist podName:18479ade-7486-4889-b313-79c6598cc773 nodeName:}" failed. 
No retries permitted until 2026-01-27 15:23:32.319893444 +0000 UTC m=+908.492293215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-memberlist") pod "speaker-8stft" (UID: "18479ade-7486-4889-b313-79c6598cc773") : secret "metallb-memberlist" not found Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819955 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-metrics\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.819966 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/18479ade-7486-4889-b313-79c6598cc773-metallb-excludel2\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.820270 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-reloader\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.820620 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-frr-startup\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.830436 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-metrics-certs\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.843803 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb6be63b-c3fd-4e21-a1b3-ffc11357a98f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk5jh\" (UID: \"cb6be63b-c3fd-4e21-a1b3-ffc11357a98f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.846491 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7dtk\" (UniqueName: \"kubernetes.io/projected/de0040d7-f7cb-4a80-ba9a-bbc8898365e1-kube-api-access-n7dtk\") pod \"frr-k8s-g49qd\" (UID: \"de0040d7-f7cb-4a80-ba9a-bbc8898365e1\") " pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.854692 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.855632 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvbll\" (UniqueName: \"kubernetes.io/projected/cb6be63b-c3fd-4e21-a1b3-ffc11357a98f-kube-api-access-cvbll\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk5jh\" (UID: \"cb6be63b-c3fd-4e21-a1b3-ffc11357a98f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.873118 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8k7w\" (UniqueName: \"kubernetes.io/projected/18479ade-7486-4889-b313-79c6598cc773-kube-api-access-p8k7w\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.920570 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bnnm\" (UniqueName: \"kubernetes.io/projected/0ecbc291-e00b-42be-b1dc-fd53bcb5256a-kube-api-access-4bnnm\") pod \"controller-6968d8fdc4-shgkw\" (UID: \"0ecbc291-e00b-42be-b1dc-fd53bcb5256a\") " pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.920632 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ecbc291-e00b-42be-b1dc-fd53bcb5256a-metrics-certs\") pod \"controller-6968d8fdc4-shgkw\" (UID: \"0ecbc291-e00b-42be-b1dc-fd53bcb5256a\") " pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.920672 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ecbc291-e00b-42be-b1dc-fd53bcb5256a-cert\") pod \"controller-6968d8fdc4-shgkw\" (UID: \"0ecbc291-e00b-42be-b1dc-fd53bcb5256a\") " 
pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.924045 4697 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.925394 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ecbc291-e00b-42be-b1dc-fd53bcb5256a-metrics-certs\") pod \"controller-6968d8fdc4-shgkw\" (UID: \"0ecbc291-e00b-42be-b1dc-fd53bcb5256a\") " pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.936328 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ecbc291-e00b-42be-b1dc-fd53bcb5256a-cert\") pod \"controller-6968d8fdc4-shgkw\" (UID: \"0ecbc291-e00b-42be-b1dc-fd53bcb5256a\") " pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:31 crc kubenswrapper[4697]: I0127 15:23:31.944318 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bnnm\" (UniqueName: \"kubernetes.io/projected/0ecbc291-e00b-42be-b1dc-fd53bcb5256a-kube-api-access-4bnnm\") pod \"controller-6968d8fdc4-shgkw\" (UID: \"0ecbc291-e00b-42be-b1dc-fd53bcb5256a\") " pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.141703 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.222157 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8lbx" event={"ID":"89adb880-08aa-412e-83a0-e9352901785f","Type":"ContainerStarted","Data":"bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723"} Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.229931 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerStarted","Data":"73617a50892a8f613bf1d612d36d25e241c7dd2907581104f006bc5a70d42b91"} Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.242363 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.325694 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-metrics-certs\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.325769 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-memberlist\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:32 crc kubenswrapper[4697]: E0127 15:23:32.325927 4697 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 15:23:32 crc kubenswrapper[4697]: E0127 15:23:32.325998 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-memberlist 
podName:18479ade-7486-4889-b313-79c6598cc773 nodeName:}" failed. No retries permitted until 2026-01-27 15:23:33.325985098 +0000 UTC m=+909.498384879 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-memberlist") pod "speaker-8stft" (UID: "18479ade-7486-4889-b313-79c6598cc773") : secret "metallb-memberlist" not found Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.330718 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-metrics-certs\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.336652 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w8lbx" podStartSLOduration=2.824109849 podStartE2EDuration="5.336637369s" podCreationTimestamp="2026-01-27 15:23:27 +0000 UTC" firstStartedPulling="2026-01-27 15:23:29.176112975 +0000 UTC m=+905.348512786" lastFinishedPulling="2026-01-27 15:23:31.688640525 +0000 UTC m=+907.861040306" observedRunningTime="2026-01-27 15:23:32.258261382 +0000 UTC m=+908.430661173" watchObservedRunningTime="2026-01-27 15:23:32.336637369 +0000 UTC m=+908.509037150" Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.340615 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh"] Jan 27 15:23:32 crc kubenswrapper[4697]: W0127 15:23:32.362246 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb6be63b_c3fd_4e21_a1b3_ffc11357a98f.slice/crio-fe2d896e1860c8046562d5582089e6493667c4d029aeea52cfbc137fad23cfbb WatchSource:0}: Error finding container fe2d896e1860c8046562d5582089e6493667c4d029aeea52cfbc137fad23cfbb: 
Status 404 returned error can't find the container with id fe2d896e1860c8046562d5582089e6493667c4d029aeea52cfbc137fad23cfbb Jan 27 15:23:32 crc kubenswrapper[4697]: I0127 15:23:32.691666 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-shgkw"] Jan 27 15:23:33 crc kubenswrapper[4697]: I0127 15:23:33.237654 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" event={"ID":"cb6be63b-c3fd-4e21-a1b3-ffc11357a98f","Type":"ContainerStarted","Data":"fe2d896e1860c8046562d5582089e6493667c4d029aeea52cfbc137fad23cfbb"} Jan 27 15:23:33 crc kubenswrapper[4697]: I0127 15:23:33.240953 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-shgkw" event={"ID":"0ecbc291-e00b-42be-b1dc-fd53bcb5256a","Type":"ContainerStarted","Data":"0569b22213b1e09bf135c719ec4199c6786d56768eaff33e8a7b52d41406540a"} Jan 27 15:23:33 crc kubenswrapper[4697]: I0127 15:23:33.241134 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-shgkw" event={"ID":"0ecbc291-e00b-42be-b1dc-fd53bcb5256a","Type":"ContainerStarted","Data":"18c4450cd5b71bbb40193f4ba41f5e2986f48224b69ba1e59f34e2ec832c317c"} Jan 27 15:23:33 crc kubenswrapper[4697]: I0127 15:23:33.241149 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-shgkw" event={"ID":"0ecbc291-e00b-42be-b1dc-fd53bcb5256a","Type":"ContainerStarted","Data":"12461b4e6a5959f3701443c859ee4cdeb010593c3f013cdf2591f4f4f78b48b0"} Jan 27 15:23:33 crc kubenswrapper[4697]: I0127 15:23:33.260236 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-shgkw" podStartSLOduration=2.260212159 podStartE2EDuration="2.260212159s" podCreationTimestamp="2026-01-27 15:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 15:23:33.25777783 +0000 UTC m=+909.430177611" watchObservedRunningTime="2026-01-27 15:23:33.260212159 +0000 UTC m=+909.432611940" Jan 27 15:23:33 crc kubenswrapper[4697]: I0127 15:23:33.339407 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-memberlist\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:33 crc kubenswrapper[4697]: I0127 15:23:33.345541 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/18479ade-7486-4889-b313-79c6598cc773-memberlist\") pod \"speaker-8stft\" (UID: \"18479ade-7486-4889-b313-79c6598cc773\") " pod="metallb-system/speaker-8stft" Jan 27 15:23:33 crc kubenswrapper[4697]: I0127 15:23:33.427925 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8stft" Jan 27 15:23:34 crc kubenswrapper[4697]: I0127 15:23:34.261126 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8stft" event={"ID":"18479ade-7486-4889-b313-79c6598cc773","Type":"ContainerStarted","Data":"21a611f91c76ae878e95ec4667b63047e568cc8adffba48314b4723d0a0dc727"} Jan 27 15:23:34 crc kubenswrapper[4697]: I0127 15:23:34.261169 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8stft" event={"ID":"18479ade-7486-4889-b313-79c6598cc773","Type":"ContainerStarted","Data":"9261e488915e3b5e030d7cc02f0a3756eacddb0bf8d8d60517a31953e4ce57f4"} Jan 27 15:23:34 crc kubenswrapper[4697]: I0127 15:23:34.261179 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8stft" event={"ID":"18479ade-7486-4889-b313-79c6598cc773","Type":"ContainerStarted","Data":"73a28094abdcb0e48574ca31e4166d44cbddf43b4117b58ff625e019114c2c65"} Jan 27 15:23:34 crc kubenswrapper[4697]: I0127 
15:23:34.262435 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:34 crc kubenswrapper[4697]: I0127 15:23:34.262529 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8stft" Jan 27 15:23:34 crc kubenswrapper[4697]: I0127 15:23:34.285530 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8stft" podStartSLOduration=3.285514138 podStartE2EDuration="3.285514138s" podCreationTimestamp="2026-01-27 15:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:23:34.28314867 +0000 UTC m=+910.455548451" watchObservedRunningTime="2026-01-27 15:23:34.285514138 +0000 UTC m=+910.457913919" Jan 27 15:23:37 crc kubenswrapper[4697]: I0127 15:23:37.914378 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:37 crc kubenswrapper[4697]: I0127 15:23:37.914945 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:37 crc kubenswrapper[4697]: I0127 15:23:37.955628 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:38 crc kubenswrapper[4697]: I0127 15:23:38.352761 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:38 crc kubenswrapper[4697]: I0127 15:23:38.393223 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8lbx"] Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.308026 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" 
event={"ID":"cb6be63b-c3fd-4e21-a1b3-ffc11357a98f","Type":"ContainerStarted","Data":"48a61d0ffd5a0b56f7a674a4e3c502294177efdf1d2083225efb37141655493b"} Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.308347 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.310878 4697 generic.go:334] "Generic (PLEG): container finished" podID="de0040d7-f7cb-4a80-ba9a-bbc8898365e1" containerID="85d208c698a3bbd5932345e12f7130e3f87b44a066d128ee3c8d2a9069429ef1" exitCode=0 Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.310982 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerDied","Data":"85d208c698a3bbd5932345e12f7130e3f87b44a066d128ee3c8d2a9069429ef1"} Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.311087 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w8lbx" podUID="89adb880-08aa-412e-83a0-e9352901785f" containerName="registry-server" containerID="cri-o://bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723" gracePeriod=2 Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.325685 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" podStartSLOduration=1.768504862 podStartE2EDuration="9.32566806s" podCreationTimestamp="2026-01-27 15:23:31 +0000 UTC" firstStartedPulling="2026-01-27 15:23:32.365161176 +0000 UTC m=+908.537560957" lastFinishedPulling="2026-01-27 15:23:39.922324374 +0000 UTC m=+916.094724155" observedRunningTime="2026-01-27 15:23:40.325075624 +0000 UTC m=+916.497475425" watchObservedRunningTime="2026-01-27 15:23:40.32566806 +0000 UTC m=+916.498067841" Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.607913 4697 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.769094 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-utilities\") pod \"89adb880-08aa-412e-83a0-e9352901785f\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.769154 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcm2s\" (UniqueName: \"kubernetes.io/projected/89adb880-08aa-412e-83a0-e9352901785f-kube-api-access-vcm2s\") pod \"89adb880-08aa-412e-83a0-e9352901785f\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.769194 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-catalog-content\") pod \"89adb880-08aa-412e-83a0-e9352901785f\" (UID: \"89adb880-08aa-412e-83a0-e9352901785f\") " Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.771232 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-utilities" (OuterVolumeSpecName: "utilities") pod "89adb880-08aa-412e-83a0-e9352901785f" (UID: "89adb880-08aa-412e-83a0-e9352901785f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.780639 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89adb880-08aa-412e-83a0-e9352901785f-kube-api-access-vcm2s" (OuterVolumeSpecName: "kube-api-access-vcm2s") pod "89adb880-08aa-412e-83a0-e9352901785f" (UID: "89adb880-08aa-412e-83a0-e9352901785f"). 
InnerVolumeSpecName "kube-api-access-vcm2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.790466 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89adb880-08aa-412e-83a0-e9352901785f" (UID: "89adb880-08aa-412e-83a0-e9352901785f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.870680 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.870721 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcm2s\" (UniqueName: \"kubernetes.io/projected/89adb880-08aa-412e-83a0-e9352901785f-kube-api-access-vcm2s\") on node \"crc\" DevicePath \"\"" Jan 27 15:23:40 crc kubenswrapper[4697]: I0127 15:23:40.870734 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89adb880-08aa-412e-83a0-e9352901785f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.320680 4697 generic.go:334] "Generic (PLEG): container finished" podID="89adb880-08aa-412e-83a0-e9352901785f" containerID="bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723" exitCode=0 Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.320821 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8lbx" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.320928 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8lbx" event={"ID":"89adb880-08aa-412e-83a0-e9352901785f","Type":"ContainerDied","Data":"bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723"} Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.323029 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8lbx" event={"ID":"89adb880-08aa-412e-83a0-e9352901785f","Type":"ContainerDied","Data":"99d087aa8fd817ef8c3334fe2259253ec6d535e0127e539343af7853a7003429"} Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.323079 4697 scope.go:117] "RemoveContainer" containerID="bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.325257 4697 generic.go:334] "Generic (PLEG): container finished" podID="de0040d7-f7cb-4a80-ba9a-bbc8898365e1" containerID="655a10759b1e53028636e4f17f70d74be44f29a8608e20ea4fd534dc344ce43a" exitCode=0 Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.325554 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerDied","Data":"655a10759b1e53028636e4f17f70d74be44f29a8608e20ea4fd534dc344ce43a"} Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.350296 4697 scope.go:117] "RemoveContainer" containerID="7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.378511 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8lbx"] Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.391443 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8lbx"] Jan 27 15:23:41 crc 
kubenswrapper[4697]: I0127 15:23:41.398373 4697 scope.go:117] "RemoveContainer" containerID="c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.423594 4697 scope.go:117] "RemoveContainer" containerID="bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723" Jan 27 15:23:41 crc kubenswrapper[4697]: E0127 15:23:41.423990 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723\": container with ID starting with bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723 not found: ID does not exist" containerID="bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.424021 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723"} err="failed to get container status \"bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723\": rpc error: code = NotFound desc = could not find container \"bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723\": container with ID starting with bd121f9ae11f8f0f2d6dc71eae444ae968f04dfbea933d4afc1e01cac74d4723 not found: ID does not exist" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.424053 4697 scope.go:117] "RemoveContainer" containerID="7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551" Jan 27 15:23:41 crc kubenswrapper[4697]: E0127 15:23:41.424360 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551\": container with ID starting with 7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551 not found: ID does not exist" 
containerID="7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.424384 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551"} err="failed to get container status \"7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551\": rpc error: code = NotFound desc = could not find container \"7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551\": container with ID starting with 7700923508d8725b94b9548f4221a2beb72f855ded789ee0aa616a8f4fa5e551 not found: ID does not exist" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.424400 4697 scope.go:117] "RemoveContainer" containerID="c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d" Jan 27 15:23:41 crc kubenswrapper[4697]: E0127 15:23:41.424647 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d\": container with ID starting with c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d not found: ID does not exist" containerID="c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.424669 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d"} err="failed to get container status \"c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d\": rpc error: code = NotFound desc = could not find container \"c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d\": container with ID starting with c0f28c8184990cd7d447593c9471456c3e8aaa08ba1d7066341c7256eafbad9d not found: ID does not exist" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.791845 4697 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gqkk8"] Jan 27 15:23:41 crc kubenswrapper[4697]: E0127 15:23:41.792115 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89adb880-08aa-412e-83a0-e9352901785f" containerName="extract-content" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.792129 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="89adb880-08aa-412e-83a0-e9352901785f" containerName="extract-content" Jan 27 15:23:41 crc kubenswrapper[4697]: E0127 15:23:41.792148 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89adb880-08aa-412e-83a0-e9352901785f" containerName="extract-utilities" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.792156 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="89adb880-08aa-412e-83a0-e9352901785f" containerName="extract-utilities" Jan 27 15:23:41 crc kubenswrapper[4697]: E0127 15:23:41.792168 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89adb880-08aa-412e-83a0-e9352901785f" containerName="registry-server" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.792177 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="89adb880-08aa-412e-83a0-e9352901785f" containerName="registry-server" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.792303 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="89adb880-08aa-412e-83a0-e9352901785f" containerName="registry-server" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.793200 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.802945 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gqkk8"] Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.885737 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-utilities\") pod \"certified-operators-gqkk8\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.885801 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-catalog-content\") pod \"certified-operators-gqkk8\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.885834 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2xmz\" (UniqueName: \"kubernetes.io/projected/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-kube-api-access-g2xmz\") pod \"certified-operators-gqkk8\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.986823 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-utilities\") pod \"certified-operators-gqkk8\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.986892 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-catalog-content\") pod \"certified-operators-gqkk8\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.986917 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2xmz\" (UniqueName: \"kubernetes.io/projected/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-kube-api-access-g2xmz\") pod \"certified-operators-gqkk8\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.987903 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-utilities\") pod \"certified-operators-gqkk8\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:41 crc kubenswrapper[4697]: I0127 15:23:41.988216 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-catalog-content\") pod \"certified-operators-gqkk8\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:42 crc kubenswrapper[4697]: I0127 15:23:42.016480 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2xmz\" (UniqueName: \"kubernetes.io/projected/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-kube-api-access-g2xmz\") pod \"certified-operators-gqkk8\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:42 crc kubenswrapper[4697]: I0127 15:23:42.111282 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:23:42 crc kubenswrapper[4697]: I0127 15:23:42.260813 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-shgkw" Jan 27 15:23:42 crc kubenswrapper[4697]: I0127 15:23:42.335071 4697 generic.go:334] "Generic (PLEG): container finished" podID="de0040d7-f7cb-4a80-ba9a-bbc8898365e1" containerID="bb2b5d9cc7fe29ef32921d1f36d5da219fd19613597768fadbdf707b3954cd05" exitCode=0 Jan 27 15:23:42 crc kubenswrapper[4697]: I0127 15:23:42.335124 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerDied","Data":"bb2b5d9cc7fe29ef32921d1f36d5da219fd19613597768fadbdf707b3954cd05"} Jan 27 15:23:42 crc kubenswrapper[4697]: I0127 15:23:42.577300 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89adb880-08aa-412e-83a0-e9352901785f" path="/var/lib/kubelet/pods/89adb880-08aa-412e-83a0-e9352901785f/volumes" Jan 27 15:23:42 crc kubenswrapper[4697]: I0127 15:23:42.587179 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gqkk8"] Jan 27 15:23:43 crc kubenswrapper[4697]: I0127 15:23:43.352447 4697 generic.go:334] "Generic (PLEG): container finished" podID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerID="f8442d28292bd0eb9db4f39f7a15b939ae45099ab7307acd21e7dd09753aaba5" exitCode=0 Jan 27 15:23:43 crc kubenswrapper[4697]: I0127 15:23:43.352533 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqkk8" event={"ID":"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82","Type":"ContainerDied","Data":"f8442d28292bd0eb9db4f39f7a15b939ae45099ab7307acd21e7dd09753aaba5"} Jan 27 15:23:43 crc kubenswrapper[4697]: I0127 15:23:43.352565 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqkk8" 
event={"ID":"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82","Type":"ContainerStarted","Data":"cee73ec2c22d8fe7101ec35c5b036635b1d5818c2e15778fac4fa2ec7a938ad1"} Jan 27 15:23:43 crc kubenswrapper[4697]: I0127 15:23:43.361578 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerStarted","Data":"eeddc2c926c7dbc9c711e4ef1e7dd5177432d375521a89958d2b57adf1cd484d"} Jan 27 15:23:43 crc kubenswrapper[4697]: I0127 15:23:43.361622 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerStarted","Data":"c9e3c9a584ab052b5078133b3632c19e3e720b46a6b1e2057d205a6e842ca6f3"} Jan 27 15:23:43 crc kubenswrapper[4697]: I0127 15:23:43.361636 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerStarted","Data":"5db7d8eb3bb4f8e3305678613b496dc9cdf9450c9538d90bae2dbaa4ba6496d4"} Jan 27 15:23:43 crc kubenswrapper[4697]: I0127 15:23:43.361647 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerStarted","Data":"32b099080cc42a874ced84643ba2c96089613725f513de26e741c4e086fac2ab"} Jan 27 15:23:43 crc kubenswrapper[4697]: I0127 15:23:43.361658 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerStarted","Data":"c2b1c3554cfcb09a5306776fac3535409ba85adf0d51ed9e5b6c64b34283636e"} Jan 27 15:23:43 crc kubenswrapper[4697]: I0127 15:23:43.434896 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8stft" Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.643890 4697 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-9q2dz"] Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.645340 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9q2dz" Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.651572 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hzcjc" Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.651754 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.651888 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.677774 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9q2dz"] Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.753414 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dlt\" (UniqueName: \"kubernetes.io/projected/54fe292f-155d-4761-9ad7-2a9940acf36f-kube-api-access-66dlt\") pod \"openstack-operator-index-9q2dz\" (UID: \"54fe292f-155d-4761-9ad7-2a9940acf36f\") " pod="openstack-operators/openstack-operator-index-9q2dz" Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.854469 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66dlt\" (UniqueName: \"kubernetes.io/projected/54fe292f-155d-4761-9ad7-2a9940acf36f-kube-api-access-66dlt\") pod \"openstack-operator-index-9q2dz\" (UID: \"54fe292f-155d-4761-9ad7-2a9940acf36f\") " pod="openstack-operators/openstack-operator-index-9q2dz" Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.878545 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66dlt\" 
(UniqueName: \"kubernetes.io/projected/54fe292f-155d-4761-9ad7-2a9940acf36f-kube-api-access-66dlt\") pod \"openstack-operator-index-9q2dz\" (UID: \"54fe292f-155d-4761-9ad7-2a9940acf36f\") " pod="openstack-operators/openstack-operator-index-9q2dz" Jan 27 15:23:46 crc kubenswrapper[4697]: I0127 15:23:46.967139 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9q2dz" Jan 27 15:23:52 crc kubenswrapper[4697]: I0127 15:23:52.147721 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" Jan 27 15:23:53 crc kubenswrapper[4697]: I0127 15:23:53.390553 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9q2dz"] Jan 27 15:23:53 crc kubenswrapper[4697]: I0127 15:23:53.403921 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mvqgc"] Jan 27 15:23:53 crc kubenswrapper[4697]: I0127 15:23:53.404622 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mvqgc" Jan 27 15:23:53 crc kubenswrapper[4697]: I0127 15:23:53.420971 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mvqgc"] Jan 27 15:23:53 crc kubenswrapper[4697]: I0127 15:23:53.545210 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mjxl\" (UniqueName: \"kubernetes.io/projected/779425e2-ee9e-45ea-b8c9-07df5c5278b2-kube-api-access-7mjxl\") pod \"openstack-operator-index-mvqgc\" (UID: \"779425e2-ee9e-45ea-b8c9-07df5c5278b2\") " pod="openstack-operators/openstack-operator-index-mvqgc" Jan 27 15:23:53 crc kubenswrapper[4697]: I0127 15:23:53.646094 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mjxl\" (UniqueName: \"kubernetes.io/projected/779425e2-ee9e-45ea-b8c9-07df5c5278b2-kube-api-access-7mjxl\") pod \"openstack-operator-index-mvqgc\" (UID: \"779425e2-ee9e-45ea-b8c9-07df5c5278b2\") " pod="openstack-operators/openstack-operator-index-mvqgc" Jan 27 15:23:53 crc kubenswrapper[4697]: I0127 15:23:53.670954 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mjxl\" (UniqueName: \"kubernetes.io/projected/779425e2-ee9e-45ea-b8c9-07df5c5278b2-kube-api-access-7mjxl\") pod \"openstack-operator-index-mvqgc\" (UID: \"779425e2-ee9e-45ea-b8c9-07df5c5278b2\") " pod="openstack-operators/openstack-operator-index-mvqgc" Jan 27 15:23:53 crc kubenswrapper[4697]: I0127 15:23:53.725136 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mvqgc" Jan 27 15:23:54 crc kubenswrapper[4697]: I0127 15:23:54.028802 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9q2dz"] Jan 27 15:23:54 crc kubenswrapper[4697]: W0127 15:23:54.180444 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod779425e2_ee9e_45ea_b8c9_07df5c5278b2.slice/crio-b482a572fad44e5cfa62a363b3a72af62b1b5cafcc158b4e45c49f61a9e0e52a WatchSource:0}: Error finding container b482a572fad44e5cfa62a363b3a72af62b1b5cafcc158b4e45c49f61a9e0e52a: Status 404 returned error can't find the container with id b482a572fad44e5cfa62a363b3a72af62b1b5cafcc158b4e45c49f61a9e0e52a Jan 27 15:23:54 crc kubenswrapper[4697]: I0127 15:23:54.185971 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mvqgc"] Jan 27 15:23:54 crc kubenswrapper[4697]: I0127 15:23:54.445197 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mvqgc" event={"ID":"779425e2-ee9e-45ea-b8c9-07df5c5278b2","Type":"ContainerStarted","Data":"b482a572fad44e5cfa62a363b3a72af62b1b5cafcc158b4e45c49f61a9e0e52a"} Jan 27 15:23:54 crc kubenswrapper[4697]: I0127 15:23:54.451370 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g49qd" event={"ID":"de0040d7-f7cb-4a80-ba9a-bbc8898365e1","Type":"ContainerStarted","Data":"07cf66b1921f1d8394fda13b3dee821e9e85b54e1f543e0333daf25cb793eff2"} Jan 27 15:23:54 crc kubenswrapper[4697]: I0127 15:23:54.452817 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:54 crc kubenswrapper[4697]: I0127 15:23:54.456750 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9q2dz" 
event={"ID":"54fe292f-155d-4761-9ad7-2a9940acf36f","Type":"ContainerStarted","Data":"38d7c366d03eebce8d7be74d3df755215fb9b80f16146ca366b97554d9381a3d"} Jan 27 15:23:54 crc kubenswrapper[4697]: I0127 15:23:54.456969 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:54 crc kubenswrapper[4697]: I0127 15:23:54.476333 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-g49qd" podStartSLOduration=15.567575739 podStartE2EDuration="23.476315149s" podCreationTimestamp="2026-01-27 15:23:31 +0000 UTC" firstStartedPulling="2026-01-27 15:23:32.031681922 +0000 UTC m=+908.204081703" lastFinishedPulling="2026-01-27 15:23:39.940421332 +0000 UTC m=+916.112821113" observedRunningTime="2026-01-27 15:23:54.475331924 +0000 UTC m=+930.647731715" watchObservedRunningTime="2026-01-27 15:23:54.476315149 +0000 UTC m=+930.648714930" Jan 27 15:23:55 crc kubenswrapper[4697]: I0127 15:23:55.109229 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:23:55 crc kubenswrapper[4697]: I0127 15:23:55.109599 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:23:55 crc kubenswrapper[4697]: I0127 15:23:55.466755 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqkk8" 
event={"ID":"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82","Type":"ContainerDied","Data":"623486231f5c96e9d354a16f43b493667a4061bcfb2552b848dbf16c20781b63"} Jan 27 15:23:55 crc kubenswrapper[4697]: I0127 15:23:55.467195 4697 generic.go:334] "Generic (PLEG): container finished" podID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerID="623486231f5c96e9d354a16f43b493667a4061bcfb2552b848dbf16c20781b63" exitCode=0 Jan 27 15:23:56 crc kubenswrapper[4697]: I0127 15:23:56.855006 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:56 crc kubenswrapper[4697]: I0127 15:23:56.900052 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-g49qd" Jan 27 15:23:58 crc kubenswrapper[4697]: I0127 15:23:58.489819 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9q2dz" event={"ID":"54fe292f-155d-4761-9ad7-2a9940acf36f","Type":"ContainerStarted","Data":"bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762"} Jan 27 15:23:58 crc kubenswrapper[4697]: I0127 15:23:58.489947 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9q2dz" podUID="54fe292f-155d-4761-9ad7-2a9940acf36f" containerName="registry-server" containerID="cri-o://bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762" gracePeriod=2 Jan 27 15:23:58 crc kubenswrapper[4697]: I0127 15:23:58.492341 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqkk8" event={"ID":"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82","Type":"ContainerStarted","Data":"e61e4dee6f8cd23c21d2b977892deb1e161e06dbea43b8277352e6152bbae897"} Jan 27 15:23:58 crc kubenswrapper[4697]: I0127 15:23:58.495181 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mvqgc" 
event={"ID":"779425e2-ee9e-45ea-b8c9-07df5c5278b2","Type":"ContainerStarted","Data":"1ca0f0ea911e938bb2ecf9b073dcbe05daf55c7b06caa62cd512b356bf1f4660"} Jan 27 15:23:58 crc kubenswrapper[4697]: I0127 15:23:58.514777 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9q2dz" podStartSLOduration=9.225345581 podStartE2EDuration="12.514762468s" podCreationTimestamp="2026-01-27 15:23:46 +0000 UTC" firstStartedPulling="2026-01-27 15:23:54.037897346 +0000 UTC m=+930.210297127" lastFinishedPulling="2026-01-27 15:23:57.327314233 +0000 UTC m=+933.499714014" observedRunningTime="2026-01-27 15:23:58.511806575 +0000 UTC m=+934.684206356" watchObservedRunningTime="2026-01-27 15:23:58.514762468 +0000 UTC m=+934.687162249" Jan 27 15:23:58 crc kubenswrapper[4697]: I0127 15:23:58.557957 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gqkk8" podStartSLOduration=3.554384229 podStartE2EDuration="17.557940087s" podCreationTimestamp="2026-01-27 15:23:41 +0000 UTC" firstStartedPulling="2026-01-27 15:23:43.353713927 +0000 UTC m=+919.526113708" lastFinishedPulling="2026-01-27 15:23:57.357269785 +0000 UTC m=+933.529669566" observedRunningTime="2026-01-27 15:23:58.556122252 +0000 UTC m=+934.728522033" watchObservedRunningTime="2026-01-27 15:23:58.557940087 +0000 UTC m=+934.730339868" Jan 27 15:23:58 crc kubenswrapper[4697]: I0127 15:23:58.558612 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mvqgc" podStartSLOduration=2.417550709 podStartE2EDuration="5.558603334s" podCreationTimestamp="2026-01-27 15:23:53 +0000 UTC" firstStartedPulling="2026-01-27 15:23:54.183111111 +0000 UTC m=+930.355510892" lastFinishedPulling="2026-01-27 15:23:57.324163736 +0000 UTC m=+933.496563517" observedRunningTime="2026-01-27 15:23:58.536595919 +0000 UTC m=+934.708995700" 
watchObservedRunningTime="2026-01-27 15:23:58.558603334 +0000 UTC m=+934.731003115" Jan 27 15:23:58 crc kubenswrapper[4697]: I0127 15:23:58.915896 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9q2dz" Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.037487 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66dlt\" (UniqueName: \"kubernetes.io/projected/54fe292f-155d-4761-9ad7-2a9940acf36f-kube-api-access-66dlt\") pod \"54fe292f-155d-4761-9ad7-2a9940acf36f\" (UID: \"54fe292f-155d-4761-9ad7-2a9940acf36f\") " Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.055337 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54fe292f-155d-4761-9ad7-2a9940acf36f-kube-api-access-66dlt" (OuterVolumeSpecName: "kube-api-access-66dlt") pod "54fe292f-155d-4761-9ad7-2a9940acf36f" (UID: "54fe292f-155d-4761-9ad7-2a9940acf36f"). InnerVolumeSpecName "kube-api-access-66dlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.139685 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66dlt\" (UniqueName: \"kubernetes.io/projected/54fe292f-155d-4761-9ad7-2a9940acf36f-kube-api-access-66dlt\") on node \"crc\" DevicePath \"\"" Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.504502 4697 generic.go:334] "Generic (PLEG): container finished" podID="54fe292f-155d-4761-9ad7-2a9940acf36f" containerID="bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762" exitCode=0 Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.504583 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9q2dz" Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.504621 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9q2dz" event={"ID":"54fe292f-155d-4761-9ad7-2a9940acf36f","Type":"ContainerDied","Data":"bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762"} Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.505880 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9q2dz" event={"ID":"54fe292f-155d-4761-9ad7-2a9940acf36f","Type":"ContainerDied","Data":"38d7c366d03eebce8d7be74d3df755215fb9b80f16146ca366b97554d9381a3d"} Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.505942 4697 scope.go:117] "RemoveContainer" containerID="bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762" Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.532188 4697 scope.go:117] "RemoveContainer" containerID="bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762" Jan 27 15:23:59 crc kubenswrapper[4697]: E0127 15:23:59.532655 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762\": container with ID starting with bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762 not found: ID does not exist" containerID="bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762" Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.532695 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762"} err="failed to get container status \"bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762\": rpc error: code = NotFound desc = could not find container 
\"bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762\": container with ID starting with bed29db0a4702156802188ae6f239e0c7e97bf84c4c4a55f70334bf4035b5762 not found: ID does not exist" Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.538881 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9q2dz"] Jan 27 15:23:59 crc kubenswrapper[4697]: I0127 15:23:59.544656 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9q2dz"] Jan 27 15:24:00 crc kubenswrapper[4697]: I0127 15:24:00.576296 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54fe292f-155d-4761-9ad7-2a9940acf36f" path="/var/lib/kubelet/pods/54fe292f-155d-4761-9ad7-2a9940acf36f/volumes" Jan 27 15:24:02 crc kubenswrapper[4697]: I0127 15:24:02.111725 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:24:02 crc kubenswrapper[4697]: I0127 15:24:02.112135 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:24:02 crc kubenswrapper[4697]: I0127 15:24:02.166760 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:24:02 crc kubenswrapper[4697]: I0127 15:24:02.563827 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:24:03 crc kubenswrapper[4697]: I0127 15:24:03.726287 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-mvqgc" Jan 27 15:24:03 crc kubenswrapper[4697]: I0127 15:24:03.726341 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-mvqgc" Jan 27 15:24:03 crc kubenswrapper[4697]: I0127 
15:24:03.792876 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-mvqgc" Jan 27 15:24:04 crc kubenswrapper[4697]: I0127 15:24:04.576330 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-mvqgc" Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.173889 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gqkk8"] Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.174125 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gqkk8" podUID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerName="registry-server" containerID="cri-o://e61e4dee6f8cd23c21d2b977892deb1e161e06dbea43b8277352e6152bbae897" gracePeriod=2 Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.549988 4697 generic.go:334] "Generic (PLEG): container finished" podID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerID="e61e4dee6f8cd23c21d2b977892deb1e161e06dbea43b8277352e6152bbae897" exitCode=0 Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.550065 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqkk8" event={"ID":"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82","Type":"ContainerDied","Data":"e61e4dee6f8cd23c21d2b977892deb1e161e06dbea43b8277352e6152bbae897"} Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.550480 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqkk8" event={"ID":"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82","Type":"ContainerDied","Data":"cee73ec2c22d8fe7101ec35c5b036635b1d5818c2e15778fac4fa2ec7a938ad1"} Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.550495 4697 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="cee73ec2c22d8fe7101ec35c5b036635b1d5818c2e15778fac4fa2ec7a938ad1" Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.550428 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.640903 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-catalog-content\") pod \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.640990 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-utilities\") pod \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.641021 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2xmz\" (UniqueName: \"kubernetes.io/projected/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-kube-api-access-g2xmz\") pod \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\" (UID: \"8be4ec26-6fb7-4cd4-a934-c57a9f08dc82\") " Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.642499 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-utilities" (OuterVolumeSpecName: "utilities") pod "8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" (UID: "8be4ec26-6fb7-4cd4-a934-c57a9f08dc82"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.673075 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-kube-api-access-g2xmz" (OuterVolumeSpecName: "kube-api-access-g2xmz") pod "8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" (UID: "8be4ec26-6fb7-4cd4-a934-c57a9f08dc82"). InnerVolumeSpecName "kube-api-access-g2xmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.695872 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" (UID: "8be4ec26-6fb7-4cd4-a934-c57a9f08dc82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.742190 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.742214 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:24:06 crc kubenswrapper[4697]: I0127 15:24:06.742223 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2xmz\" (UniqueName: \"kubernetes.io/projected/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82-kube-api-access-g2xmz\") on node \"crc\" DevicePath \"\"" Jan 27 15:24:07 crc kubenswrapper[4697]: I0127 15:24:07.555975 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gqkk8" Jan 27 15:24:07 crc kubenswrapper[4697]: I0127 15:24:07.600879 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gqkk8"] Jan 27 15:24:07 crc kubenswrapper[4697]: I0127 15:24:07.606310 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gqkk8"] Jan 27 15:24:08 crc kubenswrapper[4697]: I0127 15:24:08.578478 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" path="/var/lib/kubelet/pods/8be4ec26-6fb7-4cd4-a934-c57a9f08dc82/volumes" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.414879 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj"] Jan 27 15:24:12 crc kubenswrapper[4697]: E0127 15:24:12.415364 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerName="extract-utilities" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.415377 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerName="extract-utilities" Jan 27 15:24:12 crc kubenswrapper[4697]: E0127 15:24:12.415394 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerName="registry-server" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.415399 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerName="registry-server" Jan 27 15:24:12 crc kubenswrapper[4697]: E0127 15:24:12.415408 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fe292f-155d-4761-9ad7-2a9940acf36f" containerName="registry-server" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.415414 4697 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="54fe292f-155d-4761-9ad7-2a9940acf36f" containerName="registry-server" Jan 27 15:24:12 crc kubenswrapper[4697]: E0127 15:24:12.415428 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerName="extract-content" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.415434 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerName="extract-content" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.415533 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be4ec26-6fb7-4cd4-a934-c57a9f08dc82" containerName="registry-server" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.415552 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fe292f-155d-4761-9ad7-2a9940acf36f" containerName="registry-server" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.416312 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.427247 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hhm66" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.431497 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj"] Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.529305 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6s6\" (UniqueName: \"kubernetes.io/projected/e2327960-3adb-4edf-97cb-ffb7cbe0db07-kube-api-access-zt6s6\") pod \"6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" 
Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.529353 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-util\") pod \"6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.529408 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-bundle\") pod \"6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.630246 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt6s6\" (UniqueName: \"kubernetes.io/projected/e2327960-3adb-4edf-97cb-ffb7cbe0db07-kube-api-access-zt6s6\") pod \"6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.630318 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-util\") pod \"6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.630363 4697 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-bundle\") pod \"6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.630992 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-bundle\") pod \"6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.631063 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-util\") pod \"6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.653489 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt6s6\" (UniqueName: \"kubernetes.io/projected/e2327960-3adb-4edf-97cb-ffb7cbe0db07-kube-api-access-zt6s6\") pod \"6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.731846 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.799487 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dmznb"] Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.800844 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.819232 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmznb"] Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.966557 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-utilities\") pod \"community-operators-dmznb\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.966635 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-catalog-content\") pod \"community-operators-dmznb\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:12 crc kubenswrapper[4697]: I0127 15:24:12.966687 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv4z4\" (UniqueName: \"kubernetes.io/projected/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-kube-api-access-kv4z4\") pod \"community-operators-dmznb\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.067592 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-catalog-content\") pod \"community-operators-dmznb\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.067661 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv4z4\" (UniqueName: \"kubernetes.io/projected/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-kube-api-access-kv4z4\") pod \"community-operators-dmznb\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.067728 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-utilities\") pod \"community-operators-dmznb\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.068307 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-utilities\") pod \"community-operators-dmznb\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.068601 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-catalog-content\") pod \"community-operators-dmznb\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.091378 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kv4z4\" (UniqueName: \"kubernetes.io/projected/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-kube-api-access-kv4z4\") pod \"community-operators-dmznb\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.114165 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.281166 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj"] Jan 27 15:24:13 crc kubenswrapper[4697]: W0127 15:24:13.290505 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2327960_3adb_4edf_97cb_ffb7cbe0db07.slice/crio-b2b877e94b9c27c0a9cd7bb2ea4985c9eac51b5726e9b3ec3ffaf4aaf2080b14 WatchSource:0}: Error finding container b2b877e94b9c27c0a9cd7bb2ea4985c9eac51b5726e9b3ec3ffaf4aaf2080b14: Status 404 returned error can't find the container with id b2b877e94b9c27c0a9cd7bb2ea4985c9eac51b5726e9b3ec3ffaf4aaf2080b14 Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.555859 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmznb"] Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.593633 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerID="abc01ed12fbcc3a4a18b7c85f2d2bcb1fb3fbd30f0efb2d0163238f8e7ed4786" exitCode=0 Jan 27 15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.593681 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" event={"ID":"e2327960-3adb-4edf-97cb-ffb7cbe0db07","Type":"ContainerDied","Data":"abc01ed12fbcc3a4a18b7c85f2d2bcb1fb3fbd30f0efb2d0163238f8e7ed4786"} Jan 27 
15:24:13 crc kubenswrapper[4697]: I0127 15:24:13.593707 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" event={"ID":"e2327960-3adb-4edf-97cb-ffb7cbe0db07","Type":"ContainerStarted","Data":"b2b877e94b9c27c0a9cd7bb2ea4985c9eac51b5726e9b3ec3ffaf4aaf2080b14"} Jan 27 15:24:13 crc kubenswrapper[4697]: W0127 15:24:13.599251 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea38da88_b3d9_4c9a_ae41_3c5687e16e81.slice/crio-e3597c06fd582617f16135d0191226881310c59fe3e08bc15cb9164a357bcf09 WatchSource:0}: Error finding container e3597c06fd582617f16135d0191226881310c59fe3e08bc15cb9164a357bcf09: Status 404 returned error can't find the container with id e3597c06fd582617f16135d0191226881310c59fe3e08bc15cb9164a357bcf09 Jan 27 15:24:14 crc kubenswrapper[4697]: I0127 15:24:14.602506 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerID="9860fd06329592f5749ad7e0766872c0d45d465f6d5a50daea1187bffcb6ae21" exitCode=0 Jan 27 15:24:14 crc kubenswrapper[4697]: I0127 15:24:14.602951 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" event={"ID":"e2327960-3adb-4edf-97cb-ffb7cbe0db07","Type":"ContainerDied","Data":"9860fd06329592f5749ad7e0766872c0d45d465f6d5a50daea1187bffcb6ae21"} Jan 27 15:24:14 crc kubenswrapper[4697]: I0127 15:24:14.606828 4697 generic.go:334] "Generic (PLEG): container finished" podID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerID="eecd1c315b149d5cbc8485738cc91f3d2f6fa345db4f7dde132ff8ac8f88fb05" exitCode=0 Jan 27 15:24:14 crc kubenswrapper[4697]: I0127 15:24:14.606869 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmznb" 
event={"ID":"ea38da88-b3d9-4c9a-ae41-3c5687e16e81","Type":"ContainerDied","Data":"eecd1c315b149d5cbc8485738cc91f3d2f6fa345db4f7dde132ff8ac8f88fb05"} Jan 27 15:24:14 crc kubenswrapper[4697]: I0127 15:24:14.606890 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmznb" event={"ID":"ea38da88-b3d9-4c9a-ae41-3c5687e16e81","Type":"ContainerStarted","Data":"e3597c06fd582617f16135d0191226881310c59fe3e08bc15cb9164a357bcf09"} Jan 27 15:24:15 crc kubenswrapper[4697]: I0127 15:24:15.613714 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmznb" event={"ID":"ea38da88-b3d9-4c9a-ae41-3c5687e16e81","Type":"ContainerStarted","Data":"27ec9cd897ced91556da23c1ca22eb00f3d0971885109f3e9b6b7dc9280d2fb3"} Jan 27 15:24:15 crc kubenswrapper[4697]: I0127 15:24:15.616021 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerID="336bdd7dba2cb54e96d9db41d19413ac7e010c884fa2956a9da0b4dab4985368" exitCode=0 Jan 27 15:24:15 crc kubenswrapper[4697]: I0127 15:24:15.616053 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" event={"ID":"e2327960-3adb-4edf-97cb-ffb7cbe0db07","Type":"ContainerDied","Data":"336bdd7dba2cb54e96d9db41d19413ac7e010c884fa2956a9da0b4dab4985368"} Jan 27 15:24:16 crc kubenswrapper[4697]: I0127 15:24:16.625668 4697 generic.go:334] "Generic (PLEG): container finished" podID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerID="27ec9cd897ced91556da23c1ca22eb00f3d0971885109f3e9b6b7dc9280d2fb3" exitCode=0 Jan 27 15:24:16 crc kubenswrapper[4697]: I0127 15:24:16.625857 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmznb" event={"ID":"ea38da88-b3d9-4c9a-ae41-3c5687e16e81","Type":"ContainerDied","Data":"27ec9cd897ced91556da23c1ca22eb00f3d0971885109f3e9b6b7dc9280d2fb3"} Jan 27 
15:24:16 crc kubenswrapper[4697]: I0127 15:24:16.899585 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:16 crc kubenswrapper[4697]: I0127 15:24:16.928123 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-util\") pod \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " Jan 27 15:24:16 crc kubenswrapper[4697]: I0127 15:24:16.928225 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-bundle\") pod \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " Jan 27 15:24:16 crc kubenswrapper[4697]: I0127 15:24:16.928287 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt6s6\" (UniqueName: \"kubernetes.io/projected/e2327960-3adb-4edf-97cb-ffb7cbe0db07-kube-api-access-zt6s6\") pod \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\" (UID: \"e2327960-3adb-4edf-97cb-ffb7cbe0db07\") " Jan 27 15:24:16 crc kubenswrapper[4697]: I0127 15:24:16.929383 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-bundle" (OuterVolumeSpecName: "bundle") pod "e2327960-3adb-4edf-97cb-ffb7cbe0db07" (UID: "e2327960-3adb-4edf-97cb-ffb7cbe0db07"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:24:16 crc kubenswrapper[4697]: I0127 15:24:16.935905 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2327960-3adb-4edf-97cb-ffb7cbe0db07-kube-api-access-zt6s6" (OuterVolumeSpecName: "kube-api-access-zt6s6") pod "e2327960-3adb-4edf-97cb-ffb7cbe0db07" (UID: "e2327960-3adb-4edf-97cb-ffb7cbe0db07"). InnerVolumeSpecName "kube-api-access-zt6s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:24:16 crc kubenswrapper[4697]: I0127 15:24:16.941690 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-util" (OuterVolumeSpecName: "util") pod "e2327960-3adb-4edf-97cb-ffb7cbe0db07" (UID: "e2327960-3adb-4edf-97cb-ffb7cbe0db07"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:24:17 crc kubenswrapper[4697]: I0127 15:24:17.030109 4697 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:24:17 crc kubenswrapper[4697]: I0127 15:24:17.030141 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt6s6\" (UniqueName: \"kubernetes.io/projected/e2327960-3adb-4edf-97cb-ffb7cbe0db07-kube-api-access-zt6s6\") on node \"crc\" DevicePath \"\"" Jan 27 15:24:17 crc kubenswrapper[4697]: I0127 15:24:17.030154 4697 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2327960-3adb-4edf-97cb-ffb7cbe0db07-util\") on node \"crc\" DevicePath \"\"" Jan 27 15:24:17 crc kubenswrapper[4697]: I0127 15:24:17.632184 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmznb" 
event={"ID":"ea38da88-b3d9-4c9a-ae41-3c5687e16e81","Type":"ContainerStarted","Data":"3962c93602f0381f46b57953b587623c9047f489538bbb8a0c20a8a8f9fd2921"} Jan 27 15:24:17 crc kubenswrapper[4697]: I0127 15:24:17.633889 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" event={"ID":"e2327960-3adb-4edf-97cb-ffb7cbe0db07","Type":"ContainerDied","Data":"b2b877e94b9c27c0a9cd7bb2ea4985c9eac51b5726e9b3ec3ffaf4aaf2080b14"} Jan 27 15:24:17 crc kubenswrapper[4697]: I0127 15:24:17.633919 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2b877e94b9c27c0a9cd7bb2ea4985c9eac51b5726e9b3ec3ffaf4aaf2080b14" Jan 27 15:24:17 crc kubenswrapper[4697]: I0127 15:24:17.633953 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj" Jan 27 15:24:17 crc kubenswrapper[4697]: I0127 15:24:17.658775 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dmznb" podStartSLOduration=3.070938676 podStartE2EDuration="5.658758776s" podCreationTimestamp="2026-01-27 15:24:12 +0000 UTC" firstStartedPulling="2026-01-27 15:24:14.618066366 +0000 UTC m=+950.790466157" lastFinishedPulling="2026-01-27 15:24:17.205886476 +0000 UTC m=+953.378286257" observedRunningTime="2026-01-27 15:24:17.654230324 +0000 UTC m=+953.826630115" watchObservedRunningTime="2026-01-27 15:24:17.658758776 +0000 UTC m=+953.831158557" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.275470 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw"] Jan 27 15:24:19 crc kubenswrapper[4697]: E0127 15:24:19.275983 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerName="util" Jan 27 15:24:19 
crc kubenswrapper[4697]: I0127 15:24:19.275994 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerName="util" Jan 27 15:24:19 crc kubenswrapper[4697]: E0127 15:24:19.276007 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerName="extract" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.276014 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerName="extract" Jan 27 15:24:19 crc kubenswrapper[4697]: E0127 15:24:19.276025 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerName="pull" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.276030 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerName="pull" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.276134 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2327960-3adb-4edf-97cb-ffb7cbe0db07" containerName="extract" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.276529 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.279014 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qkx6j" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.346539 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw"] Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.359193 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhpz6\" (UniqueName: \"kubernetes.io/projected/de629115-105c-4dac-b1d9-ce37c3cf02b2-kube-api-access-vhpz6\") pod \"openstack-operator-controller-init-6fb647f7d4-299rw\" (UID: \"de629115-105c-4dac-b1d9-ce37c3cf02b2\") " pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.460080 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhpz6\" (UniqueName: \"kubernetes.io/projected/de629115-105c-4dac-b1d9-ce37c3cf02b2-kube-api-access-vhpz6\") pod \"openstack-operator-controller-init-6fb647f7d4-299rw\" (UID: \"de629115-105c-4dac-b1d9-ce37c3cf02b2\") " pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.478523 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhpz6\" (UniqueName: \"kubernetes.io/projected/de629115-105c-4dac-b1d9-ce37c3cf02b2-kube-api-access-vhpz6\") pod \"openstack-operator-controller-init-6fb647f7d4-299rw\" (UID: \"de629115-105c-4dac-b1d9-ce37c3cf02b2\") " pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" Jan 27 15:24:19 crc kubenswrapper[4697]: I0127 15:24:19.592125 4697 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" Jan 27 15:24:20 crc kubenswrapper[4697]: I0127 15:24:20.073442 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw"] Jan 27 15:24:20 crc kubenswrapper[4697]: W0127 15:24:20.075837 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde629115_105c_4dac_b1d9_ce37c3cf02b2.slice/crio-54a0d23795cfa87001242ff0d0a1c29c87398bac77d32c41df1c39dc3354b790 WatchSource:0}: Error finding container 54a0d23795cfa87001242ff0d0a1c29c87398bac77d32c41df1c39dc3354b790: Status 404 returned error can't find the container with id 54a0d23795cfa87001242ff0d0a1c29c87398bac77d32c41df1c39dc3354b790 Jan 27 15:24:20 crc kubenswrapper[4697]: I0127 15:24:20.653836 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" event={"ID":"de629115-105c-4dac-b1d9-ce37c3cf02b2","Type":"ContainerStarted","Data":"54a0d23795cfa87001242ff0d0a1c29c87398bac77d32c41df1c39dc3354b790"} Jan 27 15:24:23 crc kubenswrapper[4697]: I0127 15:24:23.114731 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:23 crc kubenswrapper[4697]: I0127 15:24:23.116110 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:23 crc kubenswrapper[4697]: I0127 15:24:23.168001 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:23 crc kubenswrapper[4697]: I0127 15:24:23.761216 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:25 crc kubenswrapper[4697]: 
I0127 15:24:25.108955 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:24:25 crc kubenswrapper[4697]: I0127 15:24:25.109312 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:24:25 crc kubenswrapper[4697]: I0127 15:24:25.109354 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:24:25 crc kubenswrapper[4697]: I0127 15:24:25.109876 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"939f9c93ba265c5d99e68011d55d9135f74940c6f260b8c578f1d67844ceb0ed"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:24:25 crc kubenswrapper[4697]: I0127 15:24:25.110318 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://939f9c93ba265c5d99e68011d55d9135f74940c6f260b8c578f1d67844ceb0ed" gracePeriod=600 Jan 27 15:24:25 crc kubenswrapper[4697]: I0127 15:24:25.578487 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmznb"] Jan 27 15:24:25 crc kubenswrapper[4697]: I0127 15:24:25.726597 4697 generic.go:334] 
"Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="939f9c93ba265c5d99e68011d55d9135f74940c6f260b8c578f1d67844ceb0ed" exitCode=0 Jan 27 15:24:25 crc kubenswrapper[4697]: I0127 15:24:25.727504 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"939f9c93ba265c5d99e68011d55d9135f74940c6f260b8c578f1d67844ceb0ed"} Jan 27 15:24:25 crc kubenswrapper[4697]: I0127 15:24:25.727540 4697 scope.go:117] "RemoveContainer" containerID="29b143d9c88ca58d5e8f4a44a13b9ecc0a8f5a18f7aa625b7c0810002ed2b91e" Jan 27 15:24:26 crc kubenswrapper[4697]: I0127 15:24:26.735622 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dmznb" podUID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerName="registry-server" containerID="cri-o://3962c93602f0381f46b57953b587623c9047f489538bbb8a0c20a8a8f9fd2921" gracePeriod=2 Jan 27 15:24:27 crc kubenswrapper[4697]: I0127 15:24:27.746143 4697 generic.go:334] "Generic (PLEG): container finished" podID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerID="3962c93602f0381f46b57953b587623c9047f489538bbb8a0c20a8a8f9fd2921" exitCode=0 Jan 27 15:24:27 crc kubenswrapper[4697]: I0127 15:24:27.746221 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmznb" event={"ID":"ea38da88-b3d9-4c9a-ae41-3c5687e16e81","Type":"ContainerDied","Data":"3962c93602f0381f46b57953b587623c9047f489538bbb8a0c20a8a8f9fd2921"} Jan 27 15:24:28 crc kubenswrapper[4697]: I0127 15:24:28.917238 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.101515 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-catalog-content\") pod \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.102014 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv4z4\" (UniqueName: \"kubernetes.io/projected/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-kube-api-access-kv4z4\") pod \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.102175 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-utilities\") pod \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\" (UID: \"ea38da88-b3d9-4c9a-ae41-3c5687e16e81\") " Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.102636 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-utilities" (OuterVolumeSpecName: "utilities") pod "ea38da88-b3d9-4c9a-ae41-3c5687e16e81" (UID: "ea38da88-b3d9-4c9a-ae41-3c5687e16e81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.108895 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-kube-api-access-kv4z4" (OuterVolumeSpecName: "kube-api-access-kv4z4") pod "ea38da88-b3d9-4c9a-ae41-3c5687e16e81" (UID: "ea38da88-b3d9-4c9a-ae41-3c5687e16e81"). InnerVolumeSpecName "kube-api-access-kv4z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.141515 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea38da88-b3d9-4c9a-ae41-3c5687e16e81" (UID: "ea38da88-b3d9-4c9a-ae41-3c5687e16e81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.204332 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.204357 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv4z4\" (UniqueName: \"kubernetes.io/projected/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-kube-api-access-kv4z4\") on node \"crc\" DevicePath \"\"" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.204369 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea38da88-b3d9-4c9a-ae41-3c5687e16e81-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.785676 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmznb" event={"ID":"ea38da88-b3d9-4c9a-ae41-3c5687e16e81","Type":"ContainerDied","Data":"e3597c06fd582617f16135d0191226881310c59fe3e08bc15cb9164a357bcf09"} Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.785760 4697 scope.go:117] "RemoveContainer" containerID="3962c93602f0381f46b57953b587623c9047f489538bbb8a0c20a8a8f9fd2921" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.785776 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmznb" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.789100 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" event={"ID":"de629115-105c-4dac-b1d9-ce37c3cf02b2","Type":"ContainerStarted","Data":"81a477865b22d27f8fb1982199b70b756d83300bb463053144ce788af7e4ba81"} Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.791199 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.797351 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"92d797174f2c61fd113567cb99c93ce3ccc4863dd93b46c4dc54df8e401db4fd"} Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.820029 4697 scope.go:117] "RemoveContainer" containerID="27ec9cd897ced91556da23c1ca22eb00f3d0971885109f3e9b6b7dc9280d2fb3" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.835115 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" podStartSLOduration=2.068674588 podStartE2EDuration="10.835088203s" podCreationTimestamp="2026-01-27 15:24:19 +0000 UTC" firstStartedPulling="2026-01-27 15:24:20.077662485 +0000 UTC m=+956.250062266" lastFinishedPulling="2026-01-27 15:24:28.8440761 +0000 UTC m=+965.016475881" observedRunningTime="2026-01-27 15:24:29.83257103 +0000 UTC m=+966.004970861" watchObservedRunningTime="2026-01-27 15:24:29.835088203 +0000 UTC m=+966.007488024" Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.874797 4697 scope.go:117] "RemoveContainer" containerID="eecd1c315b149d5cbc8485738cc91f3d2f6fa345db4f7dde132ff8ac8f88fb05" Jan 27 
15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.875519 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmznb"] Jan 27 15:24:29 crc kubenswrapper[4697]: I0127 15:24:29.886458 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dmznb"] Jan 27 15:24:30 crc kubenswrapper[4697]: I0127 15:24:30.578251 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" path="/var/lib/kubelet/pods/ea38da88-b3d9-4c9a-ae41-3c5687e16e81/volumes" Jan 27 15:24:39 crc kubenswrapper[4697]: I0127 15:24:39.594801 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6fb647f7d4-299rw" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.243675 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh"] Jan 27 15:25:07 crc kubenswrapper[4697]: E0127 15:25:07.244469 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerName="extract-content" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.244483 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerName="extract-content" Jan 27 15:25:07 crc kubenswrapper[4697]: E0127 15:25:07.244498 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerName="extract-utilities" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.244504 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerName="extract-utilities" Jan 27 15:25:07 crc kubenswrapper[4697]: E0127 15:25:07.244515 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" 
containerName="registry-server" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.244521 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerName="registry-server" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.244621 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea38da88-b3d9-4c9a-ae41-3c5687e16e81" containerName="registry-server" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.245142 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.247667 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-5z8pt" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.249325 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.250071 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.266476 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-kb5ps" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.277938 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.283543 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.337425 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.338264 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" Jan 27 15:25:07 crc kubenswrapper[4697]: W0127 15:25:07.341619 4697 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6xzqn": failed to list *v1.Secret: secrets "designate-operator-controller-manager-dockercfg-6xzqn" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Jan 27 15:25:07 crc kubenswrapper[4697]: E0127 15:25:07.341651 4697 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-6xzqn\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"designate-operator-controller-manager-dockercfg-6xzqn\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.368833 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.369517 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.372384 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tcqds" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.380189 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.385805 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdq8j\" (UniqueName: \"kubernetes.io/projected/349690fb-f1d2-4848-8424-01e794dc6317-kube-api-access-fdq8j\") pod \"barbican-operator-controller-manager-65ff799cfd-666rh\" (UID: \"349690fb-f1d2-4848-8424-01e794dc6317\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.385856 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jcsf\" (UniqueName: \"kubernetes.io/projected/6be24454-9d04-4e38-a00e-d6f62e156bd0-kube-api-access-8jcsf\") pod \"cinder-operator-controller-manager-655bf9cfbb-wppqr\" (UID: \"6be24454-9d04-4e38-a00e-d6f62e156bd0\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.395111 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-5h569"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.395756 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.415197 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-x6qdq" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.420848 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-5h569"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.435836 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.449125 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.449883 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.454055 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ksfqz" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.455791 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.456454 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.481610 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.481799 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-mwh6n" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.483586 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.487471 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdq8j\" (UniqueName: \"kubernetes.io/projected/349690fb-f1d2-4848-8424-01e794dc6317-kube-api-access-fdq8j\") pod \"barbican-operator-controller-manager-65ff799cfd-666rh\" (UID: \"349690fb-f1d2-4848-8424-01e794dc6317\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.487538 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jcsf\" (UniqueName: \"kubernetes.io/projected/6be24454-9d04-4e38-a00e-d6f62e156bd0-kube-api-access-8jcsf\") pod \"cinder-operator-controller-manager-655bf9cfbb-wppqr\" (UID: \"6be24454-9d04-4e38-a00e-d6f62e156bd0\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.487588 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q68d8\" (UniqueName: \"kubernetes.io/projected/d930a939-ecb8-4955-88bf-274d35ed9e6a-kube-api-access-q68d8\") pod \"designate-operator-controller-manager-77554cdc5c-s4rdx\" (UID: 
\"d930a939-ecb8-4955-88bf-274d35ed9e6a\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.487607 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmhf\" (UniqueName: \"kubernetes.io/projected/71562cb6-5243-4433-bd90-07c45cf11203-kube-api-access-wgmhf\") pod \"glance-operator-controller-manager-67dd55ff59-hv8n2\" (UID: \"71562cb6-5243-4433-bd90-07c45cf11203\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.546827 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.563775 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdq8j\" (UniqueName: \"kubernetes.io/projected/349690fb-f1d2-4848-8424-01e794dc6317-kube-api-access-fdq8j\") pod \"barbican-operator-controller-manager-65ff799cfd-666rh\" (UID: \"349690fb-f1d2-4848-8424-01e794dc6317\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.564493 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.581217 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jcsf\" (UniqueName: \"kubernetes.io/projected/6be24454-9d04-4e38-a00e-d6f62e156bd0-kube-api-access-8jcsf\") pod \"cinder-operator-controller-manager-655bf9cfbb-wppqr\" (UID: \"6be24454-9d04-4e38-a00e-d6f62e156bd0\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.581464 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.582159 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.582964 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.586815 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.587145 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-spqpn" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.591415 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.591465 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q68d8\" (UniqueName: \"kubernetes.io/projected/d930a939-ecb8-4955-88bf-274d35ed9e6a-kube-api-access-q68d8\") pod \"designate-operator-controller-manager-77554cdc5c-s4rdx\" (UID: \"d930a939-ecb8-4955-88bf-274d35ed9e6a\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.591492 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmhf\" (UniqueName: \"kubernetes.io/projected/71562cb6-5243-4433-bd90-07c45cf11203-kube-api-access-wgmhf\") pod \"glance-operator-controller-manager-67dd55ff59-hv8n2\" (UID: \"71562cb6-5243-4433-bd90-07c45cf11203\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.591512 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvl79\" (UniqueName: \"kubernetes.io/projected/ab1c79ce-8e28-4565-9760-5fd20ddf47eb-kube-api-access-qvl79\") pod \"heat-operator-controller-manager-575ffb885b-5h569\" (UID: \"ab1c79ce-8e28-4565-9760-5fd20ddf47eb\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.591535 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pqmm\" (UniqueName: \"kubernetes.io/projected/88db0cc4-3d70-47be-83e1-e5d2d3f3ff24-kube-api-access-5pqmm\") pod \"horizon-operator-controller-manager-77d5c5b54f-9nsp6\" (UID: \"88db0cc4-3d70-47be-83e1-e5d2d3f3ff24\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.591558 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knxhj\" (UniqueName: \"kubernetes.io/projected/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-kube-api-access-knxhj\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.640534 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.641257 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.642405 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgmhf\" (UniqueName: \"kubernetes.io/projected/71562cb6-5243-4433-bd90-07c45cf11203-kube-api-access-wgmhf\") pod \"glance-operator-controller-manager-67dd55ff59-hv8n2\" (UID: \"71562cb6-5243-4433-bd90-07c45cf11203\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.663351 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gjl7k" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.673428 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q68d8\" (UniqueName: \"kubernetes.io/projected/d930a939-ecb8-4955-88bf-274d35ed9e6a-kube-api-access-q68d8\") pod \"designate-operator-controller-manager-77554cdc5c-s4rdx\" (UID: \"d930a939-ecb8-4955-88bf-274d35ed9e6a\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.675248 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.694404 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.694470 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qvl79\" (UniqueName: \"kubernetes.io/projected/ab1c79ce-8e28-4565-9760-5fd20ddf47eb-kube-api-access-qvl79\") pod \"heat-operator-controller-manager-575ffb885b-5h569\" (UID: \"ab1c79ce-8e28-4565-9760-5fd20ddf47eb\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.694495 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pqmm\" (UniqueName: \"kubernetes.io/projected/88db0cc4-3d70-47be-83e1-e5d2d3f3ff24-kube-api-access-5pqmm\") pod \"horizon-operator-controller-manager-77d5c5b54f-9nsp6\" (UID: \"88db0cc4-3d70-47be-83e1-e5d2d3f3ff24\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.694517 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knxhj\" (UniqueName: \"kubernetes.io/projected/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-kube-api-access-knxhj\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.694555 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc8mv\" (UniqueName: \"kubernetes.io/projected/ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf-kube-api-access-qc8mv\") pod \"ironic-operator-controller-manager-768b776ffb-9qhk4\" (UID: \"ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" Jan 27 15:25:07 crc kubenswrapper[4697]: E0127 15:25:07.695469 4697 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:07 crc kubenswrapper[4697]: E0127 15:25:07.695516 4697 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert podName:d26a6673-d71e-4f0a-a8f6-e87866dafa6a nodeName:}" failed. No retries permitted until 2026-01-27 15:25:08.195500049 +0000 UTC m=+1004.367899830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert") pod "infra-operator-controller-manager-7d75bc88d5-2zk5c" (UID: "d26a6673-d71e-4f0a-a8f6-e87866dafa6a") : secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.702283 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.711471 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.712211 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.724401 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.725362 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.725920 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-n6hwx" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.743017 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-57tdj" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.744278 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvl79\" (UniqueName: \"kubernetes.io/projected/ab1c79ce-8e28-4565-9760-5fd20ddf47eb-kube-api-access-qvl79\") pod \"heat-operator-controller-manager-575ffb885b-5h569\" (UID: \"ab1c79ce-8e28-4565-9760-5fd20ddf47eb\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.749578 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.752635 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.753936 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knxhj\" (UniqueName: \"kubernetes.io/projected/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-kube-api-access-knxhj\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.760302 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pqmm\" (UniqueName: 
\"kubernetes.io/projected/88db0cc4-3d70-47be-83e1-e5d2d3f3ff24-kube-api-access-5pqmm\") pod \"horizon-operator-controller-manager-77d5c5b54f-9nsp6\" (UID: \"88db0cc4-3d70-47be-83e1-e5d2d3f3ff24\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.776453 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.777582 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.779969 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.795337 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmxtz\" (UniqueName: \"kubernetes.io/projected/39770161-132e-4037-aec7-9db6d10d17d8-kube-api-access-gmxtz\") pod \"keystone-operator-controller-manager-55f684fd56-zppcc\" (UID: \"39770161-132e-4037-aec7-9db6d10d17d8\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.795420 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc8mv\" (UniqueName: \"kubernetes.io/projected/ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf-kube-api-access-qc8mv\") pod \"ironic-operator-controller-manager-768b776ffb-9qhk4\" (UID: \"ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.795459 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4lzf7\" (UniqueName: \"kubernetes.io/projected/a068f004-7f2c-4c3d-8bfe-98fbc4b65a73-kube-api-access-4lzf7\") pod \"manila-operator-controller-manager-849fcfbb6b-5frlr\" (UID: \"a068f004-7f2c-4c3d-8bfe-98fbc4b65a73\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.795902 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-sqh4m" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.813550 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.850106 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.851004 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.860314 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc8mv\" (UniqueName: \"kubernetes.io/projected/ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf-kube-api-access-qc8mv\") pod \"ironic-operator-controller-manager-768b776ffb-9qhk4\" (UID: \"ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.876286 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9d2bw" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.899623 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvjl4\" (UniqueName: \"kubernetes.io/projected/b23f7e1b-6141-4dc3-bf18-70732ae7889a-kube-api-access-dvjl4\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs\" (UID: \"b23f7e1b-6141-4dc3-bf18-70732ae7889a\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.899683 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lzf7\" (UniqueName: \"kubernetes.io/projected/a068f004-7f2c-4c3d-8bfe-98fbc4b65a73-kube-api-access-4lzf7\") pod \"manila-operator-controller-manager-849fcfbb6b-5frlr\" (UID: \"a068f004-7f2c-4c3d-8bfe-98fbc4b65a73\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.899734 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmxtz\" (UniqueName: \"kubernetes.io/projected/39770161-132e-4037-aec7-9db6d10d17d8-kube-api-access-gmxtz\") pod 
\"keystone-operator-controller-manager-55f684fd56-zppcc\" (UID: \"39770161-132e-4037-aec7-9db6d10d17d8\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.899776 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k5hn\" (UniqueName: \"kubernetes.io/projected/42edadff-8683-4551-b634-33e4ad590fb1-kube-api-access-6k5hn\") pod \"neutron-operator-controller-manager-7ffd8d76d4-7w8b9\" (UID: \"42edadff-8683-4551-b634-33e4ad590fb1\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.927345 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.934442 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m"] Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.935645 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.945486 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-898f9" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.950964 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lzf7\" (UniqueName: \"kubernetes.io/projected/a068f004-7f2c-4c3d-8bfe-98fbc4b65a73-kube-api-access-4lzf7\") pod \"manila-operator-controller-manager-849fcfbb6b-5frlr\" (UID: \"a068f004-7f2c-4c3d-8bfe-98fbc4b65a73\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.960462 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmxtz\" (UniqueName: \"kubernetes.io/projected/39770161-132e-4037-aec7-9db6d10d17d8-kube-api-access-gmxtz\") pod \"keystone-operator-controller-manager-55f684fd56-zppcc\" (UID: \"39770161-132e-4037-aec7-9db6d10d17d8\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" Jan 27 15:25:07 crc kubenswrapper[4697]: I0127 15:25:07.993720 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.001208 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k5hn\" (UniqueName: \"kubernetes.io/projected/42edadff-8683-4551-b634-33e4ad590fb1-kube-api-access-6k5hn\") pod \"neutron-operator-controller-manager-7ffd8d76d4-7w8b9\" (UID: \"42edadff-8683-4551-b634-33e4ad590fb1\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.001259 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgz8t\" (UniqueName: \"kubernetes.io/projected/c3d1f921-6d2e-4c30-9f75-14f206a1fb7e-kube-api-access-hgz8t\") pod \"nova-operator-controller-manager-ddcbfd695-nx7cr\" (UID: \"c3d1f921-6d2e-4c30-9f75-14f206a1fb7e\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.001309 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvjl4\" (UniqueName: \"kubernetes.io/projected/b23f7e1b-6141-4dc3-bf18-70732ae7889a-kube-api-access-dvjl4\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs\" (UID: \"b23f7e1b-6141-4dc3-bf18-70732ae7889a\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.010175 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.011233 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.017160 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.017195 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.048028 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.048383 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sg6l5" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.064738 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.078928 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.080042 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.081406 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvjl4\" (UniqueName: \"kubernetes.io/projected/b23f7e1b-6141-4dc3-bf18-70732ae7889a-kube-api-access-dvjl4\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs\" (UID: \"b23f7e1b-6141-4dc3-bf18-70732ae7889a\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.084838 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.088876 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k5hn\" (UniqueName: \"kubernetes.io/projected/42edadff-8683-4551-b634-33e4ad590fb1-kube-api-access-6k5hn\") pod \"neutron-operator-controller-manager-7ffd8d76d4-7w8b9\" (UID: \"42edadff-8683-4551-b634-33e4ad590fb1\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.094850 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.106909 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw52r\" (UniqueName: \"kubernetes.io/projected/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-kube-api-access-pw52r\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.106994 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.107026 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgz8t\" (UniqueName: \"kubernetes.io/projected/c3d1f921-6d2e-4c30-9f75-14f206a1fb7e-kube-api-access-hgz8t\") pod \"nova-operator-controller-manager-ddcbfd695-nx7cr\" (UID: \"c3d1f921-6d2e-4c30-9f75-14f206a1fb7e\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.107052 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szqnn\" (UniqueName: \"kubernetes.io/projected/66cf11a2-77ca-44a8-ade8-610d02430a2d-kube-api-access-szqnn\") pod \"octavia-operator-controller-manager-7875d7675-kvp8m\" (UID: \"66cf11a2-77ca-44a8-ade8-610d02430a2d\") " 
pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.107953 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-445wx" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.122453 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.142864 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.143675 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.150916 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgz8t\" (UniqueName: \"kubernetes.io/projected/c3d1f921-6d2e-4c30-9f75-14f206a1fb7e-kube-api-access-hgz8t\") pod \"nova-operator-controller-manager-ddcbfd695-nx7cr\" (UID: \"c3d1f921-6d2e-4c30-9f75-14f206a1fb7e\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.151056 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6kwxl" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.151247 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.155538 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.203731 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.208374 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wbt\" (UniqueName: \"kubernetes.io/projected/cb062e69-364e-4798-9a7e-4cfb1b1ca571-kube-api-access-s8wbt\") pod \"ovn-operator-controller-manager-6f75f45d54-ql7xq\" (UID: \"cb062e69-364e-4798-9a7e-4cfb1b1ca571\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.208441 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.208477 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.208516 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szqnn\" (UniqueName: \"kubernetes.io/projected/66cf11a2-77ca-44a8-ade8-610d02430a2d-kube-api-access-szqnn\") pod \"octavia-operator-controller-manager-7875d7675-kvp8m\" (UID: 
\"66cf11a2-77ca-44a8-ade8-610d02430a2d\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.208547 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw52r\" (UniqueName: \"kubernetes.io/projected/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-kube-api-access-pw52r\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.208705 4697 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.208769 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert podName:d26a6673-d71e-4f0a-a8f6-e87866dafa6a nodeName:}" failed. No retries permitted until 2026-01-27 15:25:09.208751415 +0000 UTC m=+1005.381151196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert") pod "infra-operator-controller-manager-7d75bc88d5-2zk5c" (UID: "d26a6673-d71e-4f0a-a8f6-e87866dafa6a") : secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.208876 4697 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.208915 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert podName:ee7cb913-d3ef-459b-bd70-d6a2aea9ace3 nodeName:}" failed. 
No retries permitted until 2026-01-27 15:25:08.708900589 +0000 UTC m=+1004.881300370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" (UID: "ee7cb913-d3ef-459b-bd70-d6a2aea9ace3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.213237 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.262712 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw52r\" (UniqueName: \"kubernetes.io/projected/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-kube-api-access-pw52r\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.262823 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.263740 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.265314 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-l794n" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.271410 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szqnn\" (UniqueName: \"kubernetes.io/projected/66cf11a2-77ca-44a8-ade8-610d02430a2d-kube-api-access-szqnn\") pod \"octavia-operator-controller-manager-7875d7675-kvp8m\" (UID: \"66cf11a2-77ca-44a8-ade8-610d02430a2d\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.284651 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.285682 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.294450 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-sqmms" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.298474 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.340884 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2brg\" (UniqueName: \"kubernetes.io/projected/a484e650-0a10-44e5-8b88-0f4157293d48-kube-api-access-p2brg\") pod \"placement-operator-controller-manager-79d5ccc684-44hkp\" (UID: \"a484e650-0a10-44e5-8b88-0f4157293d48\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.341246 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wbt\" (UniqueName: \"kubernetes.io/projected/cb062e69-364e-4798-9a7e-4cfb1b1ca571-kube-api-access-s8wbt\") pod \"ovn-operator-controller-manager-6f75f45d54-ql7xq\" (UID: \"cb062e69-364e-4798-9a7e-4cfb1b1ca571\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.344821 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.361561 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.374492 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.380714 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.386019 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c58rh" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.440098 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8wbt\" (UniqueName: \"kubernetes.io/projected/cb062e69-364e-4798-9a7e-4cfb1b1ca571-kube-api-access-s8wbt\") pod \"ovn-operator-controller-manager-6f75f45d54-ql7xq\" (UID: \"cb062e69-364e-4798-9a7e-4cfb1b1ca571\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.443539 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfhdw\" (UniqueName: \"kubernetes.io/projected/386961d6-c4f3-48c7-a03f-768c470daee4-kube-api-access-bfhdw\") pod \"telemetry-operator-controller-manager-799bc87c89-bzmfz\" (UID: \"386961d6-c4f3-48c7-a03f-768c470daee4\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.443625 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whcgk\" (UniqueName: \"kubernetes.io/projected/eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec-kube-api-access-whcgk\") pod \"swift-operator-controller-manager-547cbdb99f-6hdkv\" (UID: \"eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.443672 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2brg\" (UniqueName: \"kubernetes.io/projected/a484e650-0a10-44e5-8b88-0f4157293d48-kube-api-access-p2brg\") pod 
\"placement-operator-controller-manager-79d5ccc684-44hkp\" (UID: \"a484e650-0a10-44e5-8b88-0f4157293d48\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.451042 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.451889 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.454913 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rnq6x" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.477286 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.487077 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.512776 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2brg\" (UniqueName: \"kubernetes.io/projected/a484e650-0a10-44e5-8b88-0f4157293d48-kube-api-access-p2brg\") pod \"placement-operator-controller-manager-79d5ccc684-44hkp\" (UID: \"a484e650-0a10-44e5-8b88-0f4157293d48\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.545520 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk4fn\" (UniqueName: \"kubernetes.io/projected/081ab885-5c5c-41c5-a1ca-69ab3e0b5b45-kube-api-access-mk4fn\") pod \"test-operator-controller-manager-69797bbcbd-bkw8p\" 
(UID: \"081ab885-5c5c-41c5-a1ca-69ab3e0b5b45\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.545580 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr49l\" (UniqueName: \"kubernetes.io/projected/89a02bfb-edab-48f6-8c52-6d5f56541057-kube-api-access-mr49l\") pod \"watcher-operator-controller-manager-6c9bb4b66c-xktdh\" (UID: \"89a02bfb-edab-48f6-8c52-6d5f56541057\") " pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.545664 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfhdw\" (UniqueName: \"kubernetes.io/projected/386961d6-c4f3-48c7-a03f-768c470daee4-kube-api-access-bfhdw\") pod \"telemetry-operator-controller-manager-799bc87c89-bzmfz\" (UID: \"386961d6-c4f3-48c7-a03f-768c470daee4\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.545717 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whcgk\" (UniqueName: \"kubernetes.io/projected/eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec-kube-api-access-whcgk\") pod \"swift-operator-controller-manager-547cbdb99f-6hdkv\" (UID: \"eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.579581 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfhdw\" (UniqueName: \"kubernetes.io/projected/386961d6-c4f3-48c7-a03f-768c470daee4-kube-api-access-bfhdw\") pod \"telemetry-operator-controller-manager-799bc87c89-bzmfz\" (UID: \"386961d6-c4f3-48c7-a03f-768c470daee4\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" Jan 27 
15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.585021 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whcgk\" (UniqueName: \"kubernetes.io/projected/eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec-kube-api-access-whcgk\") pod \"swift-operator-controller-manager-547cbdb99f-6hdkv\" (UID: \"eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.641678 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.642748 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.651519 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk4fn\" (UniqueName: \"kubernetes.io/projected/081ab885-5c5c-41c5-a1ca-69ab3e0b5b45-kube-api-access-mk4fn\") pod \"test-operator-controller-manager-69797bbcbd-bkw8p\" (UID: \"081ab885-5c5c-41c5-a1ca-69ab3e0b5b45\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.651570 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr49l\" (UniqueName: \"kubernetes.io/projected/89a02bfb-edab-48f6-8c52-6d5f56541057-kube-api-access-mr49l\") pod \"watcher-operator-controller-manager-6c9bb4b66c-xktdh\" (UID: \"89a02bfb-edab-48f6-8c52-6d5f56541057\") " pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.658470 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-78cg5" Jan 27 15:25:08 
crc kubenswrapper[4697]: I0127 15:25:08.658690 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.658883 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.683652 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.705222 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.707070 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.708301 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk4fn\" (UniqueName: \"kubernetes.io/projected/081ab885-5c5c-41c5-a1ca-69ab3e0b5b45-kube-api-access-mk4fn\") pod \"test-operator-controller-manager-69797bbcbd-bkw8p\" (UID: \"081ab885-5c5c-41c5-a1ca-69ab3e0b5b45\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.724515 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.745321 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr49l\" (UniqueName: \"kubernetes.io/projected/89a02bfb-edab-48f6-8c52-6d5f56541057-kube-api-access-mr49l\") pod \"watcher-operator-controller-manager-6c9bb4b66c-xktdh\" (UID: \"89a02bfb-edab-48f6-8c52-6d5f56541057\") " pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.749776 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6xzqn" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.750976 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.752138 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.752197 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjc74\" (UniqueName: \"kubernetes.io/projected/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-kube-api-access-tjc74\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.752242 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.752287 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.752398 4697 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.752434 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert podName:ee7cb913-d3ef-459b-bd70-d6a2aea9ace3 nodeName:}" failed. No retries permitted until 2026-01-27 15:25:09.752421713 +0000 UTC m=+1005.924821494 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" (UID: "ee7cb913-d3ef-459b-bd70-d6a2aea9ace3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.774469 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.791118 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.801473 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.816066 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.816980 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.819432 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-b289p" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.831829 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.834550 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd"] Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.857436 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.857489 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjc74\" (UniqueName: \"kubernetes.io/projected/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-kube-api-access-tjc74\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.857527 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.857670 4697 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.857715 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:09.357700469 +0000 UTC m=+1005.530100250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "metrics-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.857970 4697 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: E0127 15:25:08.857993 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:09.357985956 +0000 UTC m=+1005.530385727 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "webhook-server-cert" not found Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.877894 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjc74\" (UniqueName: \"kubernetes.io/projected/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-kube-api-access-tjc74\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:08 crc kubenswrapper[4697]: I0127 15:25:08.958856 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fshqt\" (UniqueName: \"kubernetes.io/projected/c74a171d-554d-4e80-ae59-cc340cad54be-kube-api-access-fshqt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4wpgd\" (UID: \"c74a171d-554d-4e80-ae59-cc340cad54be\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.059714 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fshqt\" (UniqueName: \"kubernetes.io/projected/c74a171d-554d-4e80-ae59-cc340cad54be-kube-api-access-fshqt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4wpgd\" (UID: \"c74a171d-554d-4e80-ae59-cc340cad54be\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.080999 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" 
event={"ID":"349690fb-f1d2-4848-8424-01e794dc6317","Type":"ContainerStarted","Data":"01e0f6725038e7f2d4fffe3f39d1ea697e3210839134971b0d5816ba1f8ea327"} Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.084044 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fshqt\" (UniqueName: \"kubernetes.io/projected/c74a171d-554d-4e80-ae59-cc340cad54be-kube-api-access-fshqt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4wpgd\" (UID: \"c74a171d-554d-4e80-ae59-cc340cad54be\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.151410 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.161325 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr"] Jan 27 15:25:09 crc kubenswrapper[4697]: W0127 15:25:09.241208 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6be24454_9d04_4e38_a00e_d6f62e156bd0.slice/crio-f661df4e98954f0dab6cef749ede88d104baa01ff788a2e29a50118309efa239 WatchSource:0}: Error finding container f661df4e98954f0dab6cef749ede88d104baa01ff788a2e29a50118309efa239: Status 404 returned error can't find the container with id f661df4e98954f0dab6cef749ede88d104baa01ff788a2e29a50118309efa239 Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.265041 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:09 crc 
kubenswrapper[4697]: E0127 15:25:09.265221 4697 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:09 crc kubenswrapper[4697]: E0127 15:25:09.265271 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert podName:d26a6673-d71e-4f0a-a8f6-e87866dafa6a nodeName:}" failed. No retries permitted until 2026-01-27 15:25:11.265254178 +0000 UTC m=+1007.437653959 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert") pod "infra-operator-controller-manager-7d75bc88d5-2zk5c" (UID: "d26a6673-d71e-4f0a-a8f6-e87866dafa6a") : secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.366333 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.366415 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:09 crc kubenswrapper[4697]: E0127 15:25:09.366550 4697 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 15:25:09 crc kubenswrapper[4697]: E0127 15:25:09.366597 4697 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:10.366583067 +0000 UTC m=+1006.538982848 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "metrics-server-cert" not found Jan 27 15:25:09 crc kubenswrapper[4697]: E0127 15:25:09.366871 4697 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 15:25:09 crc kubenswrapper[4697]: E0127 15:25:09.366949 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:10.366930025 +0000 UTC m=+1006.539329806 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "webhook-server-cert" not found Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.470389 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc"] Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.499171 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-5h569"] Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.521192 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2"] Jan 27 15:25:09 crc kubenswrapper[4697]: W0127 15:25:09.538845 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71562cb6_5243_4433_bd90_07c45cf11203.slice/crio-4aef6bc750025287543e12ced51ad3b653fb07a2975061791ccedc2277744dc2 WatchSource:0}: Error finding container 4aef6bc750025287543e12ced51ad3b653fb07a2975061791ccedc2277744dc2: Status 404 returned error can't find the container with id 4aef6bc750025287543e12ced51ad3b653fb07a2975061791ccedc2277744dc2 Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.561388 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4"] Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.569323 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs"] Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.575218 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6"] Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.771438 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:09 crc kubenswrapper[4697]: E0127 15:25:09.771642 4697 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:09 crc kubenswrapper[4697]: E0127 15:25:09.771689 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert podName:ee7cb913-d3ef-459b-bd70-d6a2aea9ace3 nodeName:}" failed. No retries permitted until 2026-01-27 15:25:11.771672544 +0000 UTC m=+1007.944072325 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" (UID: "ee7cb913-d3ef-459b-bd70-d6a2aea9ace3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.825793 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9"] Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.839861 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr"] Jan 27 15:25:09 crc kubenswrapper[4697]: W0127 15:25:09.846858 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42edadff_8683_4551_b634_33e4ad590fb1.slice/crio-11b37126a7b9f4918c5d3f406f5d044937ffcdcbc5d9eb425847234a3c35c02d WatchSource:0}: Error finding container 11b37126a7b9f4918c5d3f406f5d044937ffcdcbc5d9eb425847234a3c35c02d: Status 404 returned error can't find the container with id 11b37126a7b9f4918c5d3f406f5d044937ffcdcbc5d9eb425847234a3c35c02d Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.929582 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m"] Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.939349 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr"] Jan 27 15:25:09 crc kubenswrapper[4697]: I0127 15:25:09.962089 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx"] Jan 27 15:25:09 crc kubenswrapper[4697]: W0127 15:25:09.978536 4697 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd930a939_ecb8_4955_88bf_274d35ed9e6a.slice/crio-3c2eca24bd8f2840fb38b9a562c0c111315fb9f9b11251538fe06ef82c7c94d5 WatchSource:0}: Error finding container 3c2eca24bd8f2840fb38b9a562c0c111315fb9f9b11251538fe06ef82c7c94d5: Status 404 returned error can't find the container with id 3c2eca24bd8f2840fb38b9a562c0c111315fb9f9b11251538fe06ef82c7c94d5 Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.094600 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p"] Jan 27 15:25:10 crc kubenswrapper[4697]: W0127 15:25:10.117272 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081ab885_5c5c_41c5_a1ca_69ab3e0b5b45.slice/crio-d8bc16b4b19a94852fc293a3a47fdfbbb15d9e82a58b03afc0df7fff6c02c4bd WatchSource:0}: Error finding container d8bc16b4b19a94852fc293a3a47fdfbbb15d9e82a58b03afc0df7fff6c02c4bd: Status 404 returned error can't find the container with id d8bc16b4b19a94852fc293a3a47fdfbbb15d9e82a58b03afc0df7fff6c02c4bd Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.118522 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" event={"ID":"d930a939-ecb8-4955-88bf-274d35ed9e6a","Type":"ContainerStarted","Data":"3c2eca24bd8f2840fb38b9a562c0c111315fb9f9b11251538fe06ef82c7c94d5"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.121621 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" event={"ID":"6be24454-9d04-4e38-a00e-d6f62e156bd0","Type":"ContainerStarted","Data":"f661df4e98954f0dab6cef749ede88d104baa01ff788a2e29a50118309efa239"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.123998 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" event={"ID":"ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf","Type":"ContainerStarted","Data":"c658e4db4ee636f3206a375fe6f126ea3ad501434c3661902ddfd7200c2906e5"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.128974 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" event={"ID":"ab1c79ce-8e28-4565-9760-5fd20ddf47eb","Type":"ContainerStarted","Data":"629e2957fafb2a788f2db03c59d392207263b6f18427510923575c267db9543a"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.143602 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" event={"ID":"42edadff-8683-4551-b634-33e4ad590fb1","Type":"ContainerStarted","Data":"11b37126a7b9f4918c5d3f406f5d044937ffcdcbc5d9eb425847234a3c35c02d"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.143667 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh"] Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.149986 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv"] Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.151935 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" event={"ID":"88db0cc4-3d70-47be-83e1-e5d2d3f3ff24","Type":"ContainerStarted","Data":"0e100f942ab70da8d977a3df1cf6bbc4875e55b1de9ff499fb88d58bcbf41b98"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.154305 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" 
event={"ID":"71562cb6-5243-4433-bd90-07c45cf11203","Type":"ContainerStarted","Data":"4aef6bc750025287543e12ced51ad3b653fb07a2975061791ccedc2277744dc2"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.155617 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" event={"ID":"b23f7e1b-6141-4dc3-bf18-70732ae7889a","Type":"ContainerStarted","Data":"365b235e05c27e4fdb3214eb959e023049232d54e7f144cc6446bb9cd251703f"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.163527 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" event={"ID":"a068f004-7f2c-4c3d-8bfe-98fbc4b65a73","Type":"ContainerStarted","Data":"531e71c6e580bb55cdd3fe4bf19f9a7d7a3e478f97093da122a0736c4b15217f"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.168881 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp"] Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.174220 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq"] Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.174250 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" event={"ID":"c3d1f921-6d2e-4c30-9f75-14f206a1fb7e","Type":"ContainerStarted","Data":"6e91314b1fd0371808e53abbc871abb869e4c2d16b0de3b2d4c48513260baf65"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.185661 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz"] Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.193318 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" 
event={"ID":"39770161-132e-4037-aec7-9db6d10d17d8","Type":"ContainerStarted","Data":"477e358aabb57e7c660064da6e5379096b453d0234f0a21d7f6f2303a9e3c429"} Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.195398 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" event={"ID":"66cf11a2-77ca-44a8-ade8-610d02430a2d","Type":"ContainerStarted","Data":"02b2c1e61b85dc11929e7f67fc052c919b16b8d834dc4f232aff934a0ca0a8b4"} Jan 27 15:25:10 crc kubenswrapper[4697]: W0127 15:25:10.196082 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda484e650_0a10_44e5_8b88_0f4157293d48.slice/crio-8d10e186a5d3709665487496f906729a8e12fe7a98db48ebf0fd82d61b0047d1 WatchSource:0}: Error finding container 8d10e186a5d3709665487496f906729a8e12fe7a98db48ebf0fd82d61b0047d1: Status 404 returned error can't find the container with id 8d10e186a5d3709665487496f906729a8e12fe7a98db48ebf0fd82d61b0047d1 Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.202607 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2brg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-44hkp_openstack-operators(a484e650-0a10-44e5-8b88-0f4157293d48): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.206951 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" podUID="a484e650-0a10-44e5-8b88-0f4157293d48" Jan 27 15:25:10 crc 
kubenswrapper[4697]: I0127 15:25:10.207220 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd"] Jan 27 15:25:10 crc kubenswrapper[4697]: W0127 15:25:10.220762 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a02bfb_edab_48f6_8c52_6d5f56541057.slice/crio-37cb23a45cbaf5fb982be4f3c3ce567b1eac192fd1c2caa23ccb364676b0c187 WatchSource:0}: Error finding container 37cb23a45cbaf5fb982be4f3c3ce567b1eac192fd1c2caa23ccb364676b0c187: Status 404 returned error can't find the container with id 37cb23a45cbaf5fb982be4f3c3ce567b1eac192fd1c2caa23ccb364676b0c187 Jan 27 15:25:10 crc kubenswrapper[4697]: W0127 15:25:10.222909 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod386961d6_c4f3_48c7_a03f_768c470daee4.slice/crio-64574962d01512fa9a9f423a98c82f0d96ab634eae263f43b41d361644fdfbdb WatchSource:0}: Error finding container 64574962d01512fa9a9f423a98c82f0d96ab634eae263f43b41d361644fdfbdb: Status 404 returned error can't find the container with id 64574962d01512fa9a9f423a98c82f0d96ab634eae263f43b41d361644fdfbdb Jan 27 15:25:10 crc kubenswrapper[4697]: W0127 15:25:10.223897 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc74a171d_554d_4e80_ae59_cc340cad54be.slice/crio-e0a8401e399747eb718a8f627c541587d5e4128fe515495a6f5bd99389c3091d WatchSource:0}: Error finding container e0a8401e399747eb718a8f627c541587d5e4128fe515495a6f5bd99389c3091d: Status 404 returned error can't find the container with id e0a8401e399747eb718a8f627c541587d5e4128fe515495a6f5bd99389c3091d Jan 27 15:25:10 crc kubenswrapper[4697]: W0127 15:25:10.226110 4697 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb062e69_364e_4798_9a7e_4cfb1b1ca571.slice/crio-701add37a4cbe70a6eb39c2a499166fcb7ca068fba86aa1d94bdd0c037fc47fe WatchSource:0}: Error finding container 701add37a4cbe70a6eb39c2a499166fcb7ca068fba86aa1d94bdd0c037fc47fe: Status 404 returned error can't find the container with id 701add37a4cbe70a6eb39c2a499166fcb7ca068fba86aa1d94bdd0c037fc47fe Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.228323 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfhdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-799bc87c89-bzmfz_openstack-operators(386961d6-c4f3-48c7-a03f-768c470daee4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.229749 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" podUID="386961d6-c4f3-48c7-a03f-768c470daee4" Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.233455 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8wbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-ql7xq_openstack-operators(cb062e69-364e-4798-9a7e-4cfb1b1ca571): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.234809 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" podUID="cb062e69-364e-4798-9a7e-4cfb1b1ca571" Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.241264 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fshqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4wpgd_openstack-operators(c74a171d-554d-4e80-ae59-cc340cad54be): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.242847 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" podUID="c74a171d-554d-4e80-ae59-cc340cad54be" Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.255989 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:162fb83ed76cbf5d44ba057fbeee02a9182fdf02346afadb3e16b2e3627e1940,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mr49l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6c9bb4b66c-xktdh_openstack-operators(89a02bfb-edab-48f6-8c52-6d5f56541057): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.257208 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" podUID="89a02bfb-edab-48f6-8c52-6d5f56541057" Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.397751 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:10 crc kubenswrapper[4697]: I0127 15:25:10.397882 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.398016 4697 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.398148 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:12.398102341 +0000 UTC m=+1008.570502122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "webhook-server-cert" not found Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.398016 4697 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 15:25:10 crc kubenswrapper[4697]: E0127 15:25:10.398954 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:12.398938972 +0000 UTC m=+1008.571338823 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "metrics-server-cert" not found Jan 27 15:25:11 crc kubenswrapper[4697]: I0127 15:25:11.211508 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" event={"ID":"eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec","Type":"ContainerStarted","Data":"c69e2fca676d577faffd6e51fb4ff49ff727c128e5fed1ab03c8f8b3f6cc29b8"} Jan 27 15:25:11 crc kubenswrapper[4697]: I0127 15:25:11.214695 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" event={"ID":"386961d6-c4f3-48c7-a03f-768c470daee4","Type":"ContainerStarted","Data":"64574962d01512fa9a9f423a98c82f0d96ab634eae263f43b41d361644fdfbdb"} Jan 27 15:25:11 crc kubenswrapper[4697]: E0127 15:25:11.217776 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" podUID="386961d6-c4f3-48c7-a03f-768c470daee4" Jan 27 15:25:11 crc kubenswrapper[4697]: I0127 15:25:11.232707 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" event={"ID":"89a02bfb-edab-48f6-8c52-6d5f56541057","Type":"ContainerStarted","Data":"37cb23a45cbaf5fb982be4f3c3ce567b1eac192fd1c2caa23ccb364676b0c187"} Jan 27 15:25:11 crc kubenswrapper[4697]: E0127 15:25:11.243098 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:162fb83ed76cbf5d44ba057fbeee02a9182fdf02346afadb3e16b2e3627e1940\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" podUID="89a02bfb-edab-48f6-8c52-6d5f56541057" Jan 27 15:25:11 crc kubenswrapper[4697]: I0127 15:25:11.246853 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" event={"ID":"cb062e69-364e-4798-9a7e-4cfb1b1ca571","Type":"ContainerStarted","Data":"701add37a4cbe70a6eb39c2a499166fcb7ca068fba86aa1d94bdd0c037fc47fe"} Jan 27 15:25:11 crc kubenswrapper[4697]: E0127 15:25:11.250379 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" podUID="cb062e69-364e-4798-9a7e-4cfb1b1ca571" Jan 27 15:25:11 crc kubenswrapper[4697]: I0127 15:25:11.253234 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" event={"ID":"c74a171d-554d-4e80-ae59-cc340cad54be","Type":"ContainerStarted","Data":"e0a8401e399747eb718a8f627c541587d5e4128fe515495a6f5bd99389c3091d"} Jan 27 15:25:11 crc kubenswrapper[4697]: E0127 15:25:11.254338 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" podUID="c74a171d-554d-4e80-ae59-cc340cad54be" Jan 27 15:25:11 crc kubenswrapper[4697]: I0127 15:25:11.254943 4697 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" event={"ID":"081ab885-5c5c-41c5-a1ca-69ab3e0b5b45","Type":"ContainerStarted","Data":"d8bc16b4b19a94852fc293a3a47fdfbbb15d9e82a58b03afc0df7fff6c02c4bd"} Jan 27 15:25:11 crc kubenswrapper[4697]: I0127 15:25:11.258730 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" event={"ID":"a484e650-0a10-44e5-8b88-0f4157293d48","Type":"ContainerStarted","Data":"8d10e186a5d3709665487496f906729a8e12fe7a98db48ebf0fd82d61b0047d1"} Jan 27 15:25:11 crc kubenswrapper[4697]: E0127 15:25:11.260321 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" podUID="a484e650-0a10-44e5-8b88-0f4157293d48" Jan 27 15:25:11 crc kubenswrapper[4697]: I0127 15:25:11.342969 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:11 crc kubenswrapper[4697]: E0127 15:25:11.343131 4697 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:11 crc kubenswrapper[4697]: E0127 15:25:11.343177 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert podName:d26a6673-d71e-4f0a-a8f6-e87866dafa6a nodeName:}" failed. 
No retries permitted until 2026-01-27 15:25:15.343161415 +0000 UTC m=+1011.515561196 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert") pod "infra-operator-controller-manager-7d75bc88d5-2zk5c" (UID: "d26a6673-d71e-4f0a-a8f6-e87866dafa6a") : secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:11 crc kubenswrapper[4697]: I0127 15:25:11.851472 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:11 crc kubenswrapper[4697]: E0127 15:25:11.851821 4697 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:11 crc kubenswrapper[4697]: E0127 15:25:11.851929 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert podName:ee7cb913-d3ef-459b-bd70-d6a2aea9ace3 nodeName:}" failed. No retries permitted until 2026-01-27 15:25:15.851903689 +0000 UTC m=+1012.024303470 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" (UID: "ee7cb913-d3ef-459b-bd70-d6a2aea9ace3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:12 crc kubenswrapper[4697]: E0127 15:25:12.273546 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" podUID="386961d6-c4f3-48c7-a03f-768c470daee4" Jan 27 15:25:12 crc kubenswrapper[4697]: E0127 15:25:12.274245 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" podUID="a484e650-0a10-44e5-8b88-0f4157293d48" Jan 27 15:25:12 crc kubenswrapper[4697]: E0127 15:25:12.274504 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" podUID="cb062e69-364e-4798-9a7e-4cfb1b1ca571" Jan 27 15:25:12 crc kubenswrapper[4697]: E0127 15:25:12.274557 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" podUID="c74a171d-554d-4e80-ae59-cc340cad54be" Jan 27 15:25:12 crc kubenswrapper[4697]: E0127 15:25:12.274673 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:162fb83ed76cbf5d44ba057fbeee02a9182fdf02346afadb3e16b2e3627e1940\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" podUID="89a02bfb-edab-48f6-8c52-6d5f56541057" Jan 27 15:25:12 crc kubenswrapper[4697]: I0127 15:25:12.462583 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:12 crc kubenswrapper[4697]: I0127 15:25:12.462671 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:12 crc kubenswrapper[4697]: E0127 15:25:12.462827 4697 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 15:25:12 crc kubenswrapper[4697]: E0127 15:25:12.462873 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs 
podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:16.462859943 +0000 UTC m=+1012.635259724 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "webhook-server-cert" not found Jan 27 15:25:12 crc kubenswrapper[4697]: E0127 15:25:12.463173 4697 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 15:25:12 crc kubenswrapper[4697]: E0127 15:25:12.463201 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:16.463193711 +0000 UTC m=+1012.635593492 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "metrics-server-cert" not found Jan 27 15:25:15 crc kubenswrapper[4697]: I0127 15:25:15.414370 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:15 crc kubenswrapper[4697]: E0127 15:25:15.414633 4697 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:15 crc kubenswrapper[4697]: E0127 15:25:15.414890 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert podName:d26a6673-d71e-4f0a-a8f6-e87866dafa6a nodeName:}" failed. No retries permitted until 2026-01-27 15:25:23.414871099 +0000 UTC m=+1019.587270890 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert") pod "infra-operator-controller-manager-7d75bc88d5-2zk5c" (UID: "d26a6673-d71e-4f0a-a8f6-e87866dafa6a") : secret "infra-operator-webhook-server-cert" not found Jan 27 15:25:15 crc kubenswrapper[4697]: I0127 15:25:15.920296 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:15 crc kubenswrapper[4697]: E0127 15:25:15.920534 4697 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:15 crc kubenswrapper[4697]: E0127 15:25:15.920663 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert podName:ee7cb913-d3ef-459b-bd70-d6a2aea9ace3 nodeName:}" failed. No retries permitted until 2026-01-27 15:25:23.920625128 +0000 UTC m=+1020.093024939 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" (UID: "ee7cb913-d3ef-459b-bd70-d6a2aea9ace3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 15:25:16 crc kubenswrapper[4697]: I0127 15:25:16.527909 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:16 crc kubenswrapper[4697]: I0127 15:25:16.528010 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:16 crc kubenswrapper[4697]: E0127 15:25:16.528118 4697 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 15:25:16 crc kubenswrapper[4697]: E0127 15:25:16.528142 4697 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 15:25:16 crc kubenswrapper[4697]: E0127 15:25:16.528200 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:24.528179388 +0000 UTC m=+1020.700579179 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "metrics-server-cert" not found Jan 27 15:25:16 crc kubenswrapper[4697]: E0127 15:25:16.528221 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs podName:a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede nodeName:}" failed. No retries permitted until 2026-01-27 15:25:24.528211959 +0000 UTC m=+1020.700611750 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs") pod "openstack-operator-controller-manager-ff554fc88-js46k" (UID: "a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede") : secret "webhook-server-cert" not found Jan 27 15:25:23 crc kubenswrapper[4697]: I0127 15:25:23.447201 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:23 crc kubenswrapper[4697]: I0127 15:25:23.458408 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d26a6673-d71e-4f0a-a8f6-e87866dafa6a-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-2zk5c\" (UID: \"d26a6673-d71e-4f0a-a8f6-e87866dafa6a\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:23 crc kubenswrapper[4697]: I0127 15:25:23.715851 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:23 crc kubenswrapper[4697]: I0127 15:25:23.955219 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:23 crc kubenswrapper[4697]: I0127 15:25:23.960383 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee7cb913-d3ef-459b-bd70-d6a2aea9ace3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx\" (UID: \"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:24 crc kubenswrapper[4697]: E0127 15:25:24.020034 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 27 15:25:24 crc kubenswrapper[4697]: E0127 15:25:24.020611 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mk4fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-bkw8p_openstack-operators(081ab885-5c5c-41c5-a1ca-69ab3e0b5b45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:25:24 crc kubenswrapper[4697]: E0127 15:25:24.021861 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" podUID="081ab885-5c5c-41c5-a1ca-69ab3e0b5b45" Jan 27 15:25:24 crc kubenswrapper[4697]: I0127 15:25:24.236673 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:24 crc kubenswrapper[4697]: E0127 15:25:24.352524 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" podUID="081ab885-5c5c-41c5-a1ca-69ab3e0b5b45" Jan 27 15:25:24 crc kubenswrapper[4697]: I0127 15:25:24.565523 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:24 crc kubenswrapper[4697]: I0127 15:25:24.566599 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:24 crc kubenswrapper[4697]: I0127 15:25:24.570465 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-metrics-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:24 crc kubenswrapper[4697]: I0127 15:25:24.570923 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede-webhook-certs\") pod \"openstack-operator-controller-manager-ff554fc88-js46k\" (UID: \"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede\") " pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:24 crc kubenswrapper[4697]: I0127 15:25:24.660741 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-78cg5" Jan 27 15:25:24 crc kubenswrapper[4697]: E0127 15:25:24.662349 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977" Jan 27 15:25:24 crc kubenswrapper[4697]: E0127 15:25:24.662827 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdq8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-65ff799cfd-666rh_openstack-operators(349690fb-f1d2-4848-8424-01e794dc6317): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:25:24 crc kubenswrapper[4697]: E0127 15:25:24.664127 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" podUID="349690fb-f1d2-4848-8424-01e794dc6317" Jan 27 15:25:24 crc kubenswrapper[4697]: I0127 15:25:24.668572 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:25 crc kubenswrapper[4697]: E0127 15:25:25.362224 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" podUID="349690fb-f1d2-4848-8424-01e794dc6317" Jan 27 15:25:26 crc kubenswrapper[4697]: E0127 15:25:26.292805 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:027f3118543388d561b452a9777783b1f866ffaf59d9a1b16a225b1c5636111f" Jan 27 15:25:26 crc kubenswrapper[4697]: E0127 15:25:26.293383 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:027f3118543388d561b452a9777783b1f866ffaf59d9a1b16a225b1c5636111f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvl79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-575ffb885b-5h569_openstack-operators(ab1c79ce-8e28-4565-9760-5fd20ddf47eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:25:26 crc kubenswrapper[4697]: E0127 15:25:26.295118 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" podUID="ab1c79ce-8e28-4565-9760-5fd20ddf47eb" Jan 27 15:25:26 crc kubenswrapper[4697]: E0127 15:25:26.368221 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:027f3118543388d561b452a9777783b1f866ffaf59d9a1b16a225b1c5636111f\\\"\"" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" podUID="ab1c79ce-8e28-4565-9760-5fd20ddf47eb" Jan 27 15:25:28 crc kubenswrapper[4697]: E0127 15:25:28.037383 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 27 15:25:28 crc kubenswrapper[4697]: E0127 15:25:28.037750 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gmxtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-zppcc_openstack-operators(39770161-132e-4037-aec7-9db6d10d17d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:25:28 crc kubenswrapper[4697]: E0127 15:25:28.039468 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" podUID="39770161-132e-4037-aec7-9db6d10d17d8" Jan 27 15:25:28 crc kubenswrapper[4697]: E0127 15:25:28.384378 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" podUID="39770161-132e-4037-aec7-9db6d10d17d8" Jan 27 15:25:29 crc kubenswrapper[4697]: E0127 15:25:29.652392 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591" Jan 27 15:25:29 crc kubenswrapper[4697]: E0127 15:25:29.652980 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8jcsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-655bf9cfbb-wppqr_openstack-operators(6be24454-9d04-4e38-a00e-d6f62e156bd0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:25:29 crc kubenswrapper[4697]: E0127 15:25:29.654221 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" podUID="6be24454-9d04-4e38-a00e-d6f62e156bd0" Jan 27 15:25:30 crc kubenswrapper[4697]: E0127 15:25:30.225887 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84" Jan 27 15:25:30 crc kubenswrapper[4697]: E0127 15:25:30.226080 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4lzf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-849fcfbb6b-5frlr_openstack-operators(a068f004-7f2c-4c3d-8bfe-98fbc4b65a73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:25:30 crc kubenswrapper[4697]: E0127 15:25:30.227276 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" podUID="a068f004-7f2c-4c3d-8bfe-98fbc4b65a73" Jan 27 15:25:30 crc kubenswrapper[4697]: E0127 15:25:30.398246 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" podUID="6be24454-9d04-4e38-a00e-d6f62e156bd0" Jan 27 15:25:30 crc kubenswrapper[4697]: E0127 15:25:30.400202 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" podUID="a068f004-7f2c-4c3d-8bfe-98fbc4b65a73" Jan 27 15:25:32 crc kubenswrapper[4697]: E0127 15:25:32.904939 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f" Jan 27 15:25:32 crc kubenswrapper[4697]: E0127 15:25:32.905951 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q68d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-77554cdc5c-s4rdx_openstack-operators(d930a939-ecb8-4955-88bf-274d35ed9e6a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:25:32 crc kubenswrapper[4697]: E0127 15:25:32.907287 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" podUID="d930a939-ecb8-4955-88bf-274d35ed9e6a" Jan 27 15:25:33 crc kubenswrapper[4697]: E0127 15:25:33.414889 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" podUID="d930a939-ecb8-4955-88bf-274d35ed9e6a" Jan 27 15:25:38 crc kubenswrapper[4697]: I0127 15:25:38.570357 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:25:40 crc kubenswrapper[4697]: E0127 15:25:40.176535 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61" Jan 27 15:25:40 crc kubenswrapper[4697]: E0127 15:25:40.177009 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgz8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-ddcbfd695-nx7cr_openstack-operators(c3d1f921-6d2e-4c30-9f75-14f206a1fb7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:25:40 crc kubenswrapper[4697]: E0127 15:25:40.179720 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" podUID="c3d1f921-6d2e-4c30-9f75-14f206a1fb7e" Jan 27 15:25:40 crc kubenswrapper[4697]: E0127 15:25:40.459103 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" podUID="c3d1f921-6d2e-4c30-9f75-14f206a1fb7e" Jan 27 15:25:40 crc kubenswrapper[4697]: E0127 15:25:40.764833 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 15:25:40 crc kubenswrapper[4697]: E0127 15:25:40.764991 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fshqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4wpgd_openstack-operators(c74a171d-554d-4e80-ae59-cc340cad54be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:25:40 crc kubenswrapper[4697]: E0127 15:25:40.766139 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" podUID="c74a171d-554d-4e80-ae59-cc340cad54be" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.113585 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c"] Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.182490 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k"] Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.488043 4697 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" event={"ID":"42edadff-8683-4551-b634-33e4ad590fb1","Type":"ContainerStarted","Data":"a40f0dcf8ffe05d9221423b0f01bdfe904480a2e20a7a7cc4003854ab3b0c004"} Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.488842 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.495365 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx"] Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.503468 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" event={"ID":"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede","Type":"ContainerStarted","Data":"31606325f0554475bb7f5f6096a5e4d2a53fed6d8f2a4761f8ca489505b7215f"} Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.510437 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" event={"ID":"b23f7e1b-6141-4dc3-bf18-70732ae7889a","Type":"ContainerStarted","Data":"e71f0a9e24d025d5ee40d006099bee0bbb7e595bcdf1648651657d72f5741eca"} Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.511143 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.523544 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" event={"ID":"eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec","Type":"ContainerStarted","Data":"c6823dd7a7535d5f1559b0a92f227affc4a21ea7685dedf48383018d72aef8ca"} Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.524381 4697 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.528251 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" event={"ID":"ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf","Type":"ContainerStarted","Data":"2cd90a4350b4bea2dcc5ab1f18b09573fb41d8c7f6e84e3ffacd0ee953eeb05a"} Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.528905 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.537858 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" podStartSLOduration=10.956176187 podStartE2EDuration="34.537833946s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.891761217 +0000 UTC m=+1006.064160998" lastFinishedPulling="2026-01-27 15:25:33.473418976 +0000 UTC m=+1029.645818757" observedRunningTime="2026-01-27 15:25:41.529248853 +0000 UTC m=+1037.701648644" watchObservedRunningTime="2026-01-27 15:25:41.537833946 +0000 UTC m=+1037.710233727" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.541007 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" event={"ID":"d26a6673-d71e-4f0a-a8f6-e87866dafa6a","Type":"ContainerStarted","Data":"f0431d1743672c9e639ca3c2c8d31726ce1ed227478965fb749eac7e896b0b3f"} Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.552224 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" 
event={"ID":"cb062e69-364e-4798-9a7e-4cfb1b1ca571","Type":"ContainerStarted","Data":"01c6754fe2f7fc1b3017a10f13b46406c17a9433355838ebfd9625478e10c720"} Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.552584 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.558144 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" event={"ID":"66cf11a2-77ca-44a8-ade8-610d02430a2d","Type":"ContainerStarted","Data":"d91e3dde05bd8d2df1e647da506fbcbda4ba555d9daaba6800c88075b79bf1d7"} Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.558734 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.565133 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" event={"ID":"71562cb6-5243-4433-bd90-07c45cf11203","Type":"ContainerStarted","Data":"4f2d640640941b98bb9ad0866566286d8214ed33d3a687aa37fc6413207999c9"} Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.565714 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.567055 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" podStartSLOduration=10.66912545 podStartE2EDuration="34.567034579s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.574833341 +0000 UTC m=+1005.747233122" lastFinishedPulling="2026-01-27 15:25:33.47274247 +0000 UTC m=+1029.645142251" 
observedRunningTime="2026-01-27 15:25:41.564036105 +0000 UTC m=+1037.736435886" watchObservedRunningTime="2026-01-27 15:25:41.567034579 +0000 UTC m=+1037.739434370" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.652630 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" podStartSLOduration=10.75073947 podStartE2EDuration="34.652608887s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.572029282 +0000 UTC m=+1005.744429053" lastFinishedPulling="2026-01-27 15:25:33.473898689 +0000 UTC m=+1029.646298470" observedRunningTime="2026-01-27 15:25:41.586231653 +0000 UTC m=+1037.758631434" watchObservedRunningTime="2026-01-27 15:25:41.652608887 +0000 UTC m=+1037.825008658" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.667904 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" podStartSLOduration=10.383587047 podStartE2EDuration="33.667882065s" podCreationTimestamp="2026-01-27 15:25:08 +0000 UTC" firstStartedPulling="2026-01-27 15:25:10.190129683 +0000 UTC m=+1006.362529464" lastFinishedPulling="2026-01-27 15:25:33.474424701 +0000 UTC m=+1029.646824482" observedRunningTime="2026-01-27 15:25:41.622724308 +0000 UTC m=+1037.795124089" watchObservedRunningTime="2026-01-27 15:25:41.667882065 +0000 UTC m=+1037.840281846" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.678888 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" podStartSLOduration=11.310035577 podStartE2EDuration="34.678872067s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.557554064 +0000 UTC m=+1005.729953845" lastFinishedPulling="2026-01-27 15:25:32.926390554 +0000 UTC m=+1029.098790335" 
observedRunningTime="2026-01-27 15:25:41.669824153 +0000 UTC m=+1037.842223934" watchObservedRunningTime="2026-01-27 15:25:41.678872067 +0000 UTC m=+1037.851271848" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.689084 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" podStartSLOduration=4.056087889 podStartE2EDuration="34.689070029s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:10.233362813 +0000 UTC m=+1006.405762594" lastFinishedPulling="2026-01-27 15:25:40.866344933 +0000 UTC m=+1037.038744734" observedRunningTime="2026-01-27 15:25:41.688966827 +0000 UTC m=+1037.861366608" watchObservedRunningTime="2026-01-27 15:25:41.689070029 +0000 UTC m=+1037.861469810" Jan 27 15:25:41 crc kubenswrapper[4697]: I0127 15:25:41.713672 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" podStartSLOduration=11.139718479 podStartE2EDuration="34.713655548s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.900765299 +0000 UTC m=+1006.073165080" lastFinishedPulling="2026-01-27 15:25:33.474702368 +0000 UTC m=+1029.647102149" observedRunningTime="2026-01-27 15:25:41.709167967 +0000 UTC m=+1037.881567738" watchObservedRunningTime="2026-01-27 15:25:41.713655548 +0000 UTC m=+1037.886055319" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.585698 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" event={"ID":"081ab885-5c5c-41c5-a1ca-69ab3e0b5b45","Type":"ContainerStarted","Data":"0562884d06497c718b770b6f14e1315d3c14271028625d61d59f4a2771a0d64c"} Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.586254 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.588254 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" event={"ID":"386961d6-c4f3-48c7-a03f-768c470daee4","Type":"ContainerStarted","Data":"76db574a3b5eae75ae6ce1aff848cef6114576045744f01704ba5805178ed0d3"} Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.588410 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.591213 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" event={"ID":"89a02bfb-edab-48f6-8c52-6d5f56541057","Type":"ContainerStarted","Data":"df7eb27a601d21913fa5fee30691a13cb478f9483891b94b76f50c75e055b74b"} Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.591372 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.593303 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" event={"ID":"a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede","Type":"ContainerStarted","Data":"7109a5df5bda2a92145c155bddbb5a8e5c55af02a4962337af8fbf8602867e88"} Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.593392 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.597607 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" 
event={"ID":"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3","Type":"ContainerStarted","Data":"30f8a3a0984e24451199f1fb8824a6b4ee3aac74cb53e67576a6a3ae0dd5ded5"} Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.599351 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" event={"ID":"88db0cc4-3d70-47be-83e1-e5d2d3f3ff24","Type":"ContainerStarted","Data":"163be3fb3212f76399c9b0e0a2aaea26efdd1f355b555007add30875c3805560"} Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.599447 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.601209 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" event={"ID":"a484e650-0a10-44e5-8b88-0f4157293d48","Type":"ContainerStarted","Data":"ba438f50e3dedcc76665ac6415077862993ae15d867555d9b3b345ca0bc97986"} Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.601389 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.604035 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" event={"ID":"349690fb-f1d2-4848-8424-01e794dc6317","Type":"ContainerStarted","Data":"ef213f4dcb201df77bf34169f7d752eafe09ec8c980b5983a7e89abfbf89122d"} Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.604239 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.605613 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" event={"ID":"ab1c79ce-8e28-4565-9760-5fd20ddf47eb","Type":"ContainerStarted","Data":"ca1f1cc95eaf245e4e4104c576e69300ede9c3103229e1f1ada8061bc92228ad"} Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.633052 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" podStartSLOduration=3.792472078 podStartE2EDuration="34.633036427s" podCreationTimestamp="2026-01-27 15:25:08 +0000 UTC" firstStartedPulling="2026-01-27 15:25:10.155135407 +0000 UTC m=+1006.327535188" lastFinishedPulling="2026-01-27 15:25:40.995699756 +0000 UTC m=+1037.168099537" observedRunningTime="2026-01-27 15:25:42.62429883 +0000 UTC m=+1038.796698611" watchObservedRunningTime="2026-01-27 15:25:42.633036427 +0000 UTC m=+1038.805436208" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.651604 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" podStartSLOduration=4.0398522119999996 podStartE2EDuration="34.651586996s" podCreationTimestamp="2026-01-27 15:25:08 +0000 UTC" firstStartedPulling="2026-01-27 15:25:10.255767038 +0000 UTC m=+1006.428166809" lastFinishedPulling="2026-01-27 15:25:40.867501792 +0000 UTC m=+1037.039901593" observedRunningTime="2026-01-27 15:25:42.642251265 +0000 UTC m=+1038.814651046" watchObservedRunningTime="2026-01-27 15:25:42.651586996 +0000 UTC m=+1038.823986777" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.691719 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" podStartSLOduration=11.79330966 podStartE2EDuration="35.691700269s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.574065373 +0000 UTC m=+1005.746465154" 
lastFinishedPulling="2026-01-27 15:25:33.472455992 +0000 UTC m=+1029.644855763" observedRunningTime="2026-01-27 15:25:42.680916302 +0000 UTC m=+1038.853316083" watchObservedRunningTime="2026-01-27 15:25:42.691700269 +0000 UTC m=+1038.864100050" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.745141 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" podStartSLOduration=3.8728852529999998 podStartE2EDuration="35.745118611s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.536853901 +0000 UTC m=+1005.709253682" lastFinishedPulling="2026-01-27 15:25:41.409087259 +0000 UTC m=+1037.581487040" observedRunningTime="2026-01-27 15:25:42.736351844 +0000 UTC m=+1038.908751625" watchObservedRunningTime="2026-01-27 15:25:42.745118611 +0000 UTC m=+1038.917518412" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.846110 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" podStartSLOduration=4.18120787 podStartE2EDuration="34.84607734s" podCreationTimestamp="2026-01-27 15:25:08 +0000 UTC" firstStartedPulling="2026-01-27 15:25:10.202464188 +0000 UTC m=+1006.374863979" lastFinishedPulling="2026-01-27 15:25:40.867333668 +0000 UTC m=+1037.039733449" observedRunningTime="2026-01-27 15:25:42.80245489 +0000 UTC m=+1038.974854671" watchObservedRunningTime="2026-01-27 15:25:42.84607734 +0000 UTC m=+1039.018477121" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.849432 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" podStartSLOduration=3.64209229 podStartE2EDuration="35.849415143s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:08.774644503 +0000 UTC m=+1004.947044284" 
lastFinishedPulling="2026-01-27 15:25:40.981967356 +0000 UTC m=+1037.154367137" observedRunningTime="2026-01-27 15:25:42.844118032 +0000 UTC m=+1039.016517813" watchObservedRunningTime="2026-01-27 15:25:42.849415143 +0000 UTC m=+1039.021814924" Jan 27 15:25:42 crc kubenswrapper[4697]: I0127 15:25:42.935731 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" podStartSLOduration=34.935711299 podStartE2EDuration="34.935711299s" podCreationTimestamp="2026-01-27 15:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:25:42.93047432 +0000 UTC m=+1039.102874101" watchObservedRunningTime="2026-01-27 15:25:42.935711299 +0000 UTC m=+1039.108111080" Jan 27 15:25:43 crc kubenswrapper[4697]: I0127 15:25:43.596147 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" podStartSLOduration=4.908186306 podStartE2EDuration="35.596127387s" podCreationTimestamp="2026-01-27 15:25:08 +0000 UTC" firstStartedPulling="2026-01-27 15:25:10.228203245 +0000 UTC m=+1006.400603026" lastFinishedPulling="2026-01-27 15:25:40.916144326 +0000 UTC m=+1037.088544107" observedRunningTime="2026-01-27 15:25:42.962149804 +0000 UTC m=+1039.134549585" watchObservedRunningTime="2026-01-27 15:25:43.596127387 +0000 UTC m=+1039.768527178" Jan 27 15:25:44 crc kubenswrapper[4697]: I0127 15:25:44.640177 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" event={"ID":"39770161-132e-4037-aec7-9db6d10d17d8","Type":"ContainerStarted","Data":"07a57679a34f8d8d194b55ea09f47087d93607ae22e8346e2cb8864ab60fe284"} Jan 27 15:25:44 crc kubenswrapper[4697]: I0127 15:25:44.641223 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" Jan 27 15:25:44 crc kubenswrapper[4697]: I0127 15:25:44.664918 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" podStartSLOduration=3.022952174 podStartE2EDuration="37.664896853s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.484605858 +0000 UTC m=+1005.657005639" lastFinishedPulling="2026-01-27 15:25:44.126550537 +0000 UTC m=+1040.298950318" observedRunningTime="2026-01-27 15:25:44.662922814 +0000 UTC m=+1040.835322595" watchObservedRunningTime="2026-01-27 15:25:44.664896853 +0000 UTC m=+1040.837296634" Jan 27 15:25:46 crc kubenswrapper[4697]: I0127 15:25:46.653457 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" event={"ID":"a068f004-7f2c-4c3d-8bfe-98fbc4b65a73","Type":"ContainerStarted","Data":"811f117ecef293eb2e79b7801628057f464305c72494196f5f627722031c07e6"} Jan 27 15:25:46 crc kubenswrapper[4697]: I0127 15:25:46.653799 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" Jan 27 15:25:46 crc kubenswrapper[4697]: I0127 15:25:46.667675 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" event={"ID":"d26a6673-d71e-4f0a-a8f6-e87866dafa6a","Type":"ContainerStarted","Data":"a575ead850be22faeafc41575646e7654c7e7e4920b7327d599dc2e6366be9d7"} Jan 27 15:25:46 crc kubenswrapper[4697]: I0127 15:25:46.667735 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:46 crc kubenswrapper[4697]: I0127 15:25:46.670287 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" event={"ID":"ee7cb913-d3ef-459b-bd70-d6a2aea9ace3","Type":"ContainerStarted","Data":"8e62944e1dbc3cb30568eddc6c3fec6fce20e16aa1e537fef24e0d7e7a3c1ab9"} Jan 27 15:25:46 crc kubenswrapper[4697]: I0127 15:25:46.670497 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:46 crc kubenswrapper[4697]: I0127 15:25:46.686466 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" podStartSLOduration=3.890728855 podStartE2EDuration="39.686444805s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.913144766 +0000 UTC m=+1006.085544547" lastFinishedPulling="2026-01-27 15:25:45.708860716 +0000 UTC m=+1041.881260497" observedRunningTime="2026-01-27 15:25:46.679727159 +0000 UTC m=+1042.852126940" watchObservedRunningTime="2026-01-27 15:25:46.686444805 +0000 UTC m=+1042.858844586" Jan 27 15:25:46 crc kubenswrapper[4697]: I0127 15:25:46.737509 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" podStartSLOduration=35.638847781 podStartE2EDuration="39.737493479s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:41.614573716 +0000 UTC m=+1037.786973497" lastFinishedPulling="2026-01-27 15:25:45.713219404 +0000 UTC m=+1041.885619195" observedRunningTime="2026-01-27 15:25:46.721714918 +0000 UTC m=+1042.894114699" watchObservedRunningTime="2026-01-27 15:25:46.737493479 +0000 UTC m=+1042.909893260" Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.570992 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-666rh" Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.599817 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" podStartSLOduration=36.065242716 podStartE2EDuration="40.599777364s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:41.170823611 +0000 UTC m=+1037.343223392" lastFinishedPulling="2026-01-27 15:25:45.705358259 +0000 UTC m=+1041.877758040" observedRunningTime="2026-01-27 15:25:46.739724594 +0000 UTC m=+1042.912124375" watchObservedRunningTime="2026-01-27 15:25:47.599777364 +0000 UTC m=+1043.772177145" Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.679024 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" event={"ID":"d930a939-ecb8-4955-88bf-274d35ed9e6a","Type":"ContainerStarted","Data":"d4f49600a0cb523cf7932debe71cc9e1cd97f8956a5f4e222f8a6f6fecab8a6e"} Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.679961 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.681013 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" event={"ID":"6be24454-9d04-4e38-a00e-d6f62e156bd0","Type":"ContainerStarted","Data":"86242666f0f6ebc6c04912c581647fe3c2451ea48329146fbb53253fdcfc2913"} Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.681345 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.697186 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" podStartSLOduration=4.192977627 podStartE2EDuration="40.697165175s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.993092595 +0000 UTC m=+1006.165492376" lastFinishedPulling="2026-01-27 15:25:46.497280143 +0000 UTC m=+1042.669679924" observedRunningTime="2026-01-27 15:25:47.694490528 +0000 UTC m=+1043.866890319" watchObservedRunningTime="2026-01-27 15:25:47.697165175 +0000 UTC m=+1043.869564956" Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.711063 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-hv8n2" Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.715884 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" podStartSLOduration=3.313200728 podStartE2EDuration="40.715780385s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.262197282 +0000 UTC m=+1005.434597063" lastFinishedPulling="2026-01-27 15:25:46.664776939 +0000 UTC m=+1042.837176720" observedRunningTime="2026-01-27 15:25:47.712164496 +0000 UTC m=+1043.884564277" watchObservedRunningTime="2026-01-27 15:25:47.715780385 +0000 UTC m=+1043.888180166" Jan 27 15:25:47 crc kubenswrapper[4697]: I0127 15:25:47.780912 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9nsp6" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.018866 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.021336 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-575ffb885b-5h569" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.021382 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9qhk4" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.126920 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.158381 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-7w8b9" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.302004 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-kvp8m" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.687908 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6hdkv" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.707642 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bzmfz" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.728518 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ql7xq" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.794118 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-44hkp" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.806402 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-xktdh" Jan 27 15:25:48 crc kubenswrapper[4697]: I0127 15:25:48.843098 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bkw8p" Jan 27 15:25:53 crc kubenswrapper[4697]: I0127 15:25:53.721473 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-2zk5c" Jan 27 15:25:54 crc kubenswrapper[4697]: I0127 15:25:54.243701 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx" Jan 27 15:25:54 crc kubenswrapper[4697]: E0127 15:25:54.573576 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" podUID="c74a171d-554d-4e80-ae59-cc340cad54be" Jan 27 15:25:54 crc kubenswrapper[4697]: I0127 15:25:54.675729 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-ff554fc88-js46k" Jan 27 15:25:54 crc kubenswrapper[4697]: I0127 15:25:54.775115 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" event={"ID":"c3d1f921-6d2e-4c30-9f75-14f206a1fb7e","Type":"ContainerStarted","Data":"5515c1752b92423500ab387c713d02654ff13a23d68be459d70d84eda8e02a34"} Jan 27 15:25:54 crc kubenswrapper[4697]: I0127 15:25:54.776074 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" Jan 27 15:25:57 
crc kubenswrapper[4697]: I0127 15:25:57.586531 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-wppqr" Jan 27 15:25:57 crc kubenswrapper[4697]: I0127 15:25:57.607337 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" podStartSLOduration=6.470732832 podStartE2EDuration="50.607311162s" podCreationTimestamp="2026-01-27 15:25:07 +0000 UTC" firstStartedPulling="2026-01-27 15:25:09.945286572 +0000 UTC m=+1006.117686353" lastFinishedPulling="2026-01-27 15:25:54.081864902 +0000 UTC m=+1050.254264683" observedRunningTime="2026-01-27 15:25:54.797874726 +0000 UTC m=+1050.970274517" watchObservedRunningTime="2026-01-27 15:25:57.607311162 +0000 UTC m=+1053.779710973" Jan 27 15:25:58 crc kubenswrapper[4697]: I0127 15:25:58.067587 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-zppcc" Jan 27 15:25:58 crc kubenswrapper[4697]: I0127 15:25:58.107639 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-5frlr" Jan 27 15:25:58 crc kubenswrapper[4697]: I0127 15:25:58.755316 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-s4rdx" Jan 27 15:26:07 crc kubenswrapper[4697]: I0127 15:26:07.883488 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" event={"ID":"c74a171d-554d-4e80-ae59-cc340cad54be","Type":"ContainerStarted","Data":"c991ceefc715c18663f5aa1713a4ac094823c6aca69d343f9cc4a7f45e76a689"} Jan 27 15:26:08 crc kubenswrapper[4697]: I0127 15:26:08.216586 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-nx7cr" Jan 27 15:26:08 crc kubenswrapper[4697]: I0127 15:26:08.251504 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4wpgd" podStartSLOduration=3.494812508 podStartE2EDuration="1m0.25147532s" podCreationTimestamp="2026-01-27 15:25:08 +0000 UTC" firstStartedPulling="2026-01-27 15:25:10.241191496 +0000 UTC m=+1006.413591277" lastFinishedPulling="2026-01-27 15:26:06.997854308 +0000 UTC m=+1063.170254089" observedRunningTime="2026-01-27 15:26:07.898234466 +0000 UTC m=+1064.070634267" watchObservedRunningTime="2026-01-27 15:26:08.25147532 +0000 UTC m=+1064.423875131" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.525242 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkhjf"] Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.526609 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.531487 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.531623 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.531724 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-b7c5b" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.540370 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.545741 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkhjf"] Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.647273 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aa255c-36bb-42e5-bcaf-917928075db3-config\") pod \"dnsmasq-dns-675f4bcbfc-lkhjf\" (UID: \"56aa255c-36bb-42e5-bcaf-917928075db3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.647574 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46gcr\" (UniqueName: \"kubernetes.io/projected/56aa255c-36bb-42e5-bcaf-917928075db3-kube-api-access-46gcr\") pod \"dnsmasq-dns-675f4bcbfc-lkhjf\" (UID: \"56aa255c-36bb-42e5-bcaf-917928075db3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.682833 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zvhbw"] Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.683888 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.688277 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.749808 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zvhbw"] Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.750420 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aa255c-36bb-42e5-bcaf-917928075db3-config\") pod \"dnsmasq-dns-675f4bcbfc-lkhjf\" (UID: \"56aa255c-36bb-42e5-bcaf-917928075db3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.750455 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46gcr\" (UniqueName: \"kubernetes.io/projected/56aa255c-36bb-42e5-bcaf-917928075db3-kube-api-access-46gcr\") pod \"dnsmasq-dns-675f4bcbfc-lkhjf\" (UID: \"56aa255c-36bb-42e5-bcaf-917928075db3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.751491 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aa255c-36bb-42e5-bcaf-917928075db3-config\") pod \"dnsmasq-dns-675f4bcbfc-lkhjf\" (UID: \"56aa255c-36bb-42e5-bcaf-917928075db3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.807429 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46gcr\" (UniqueName: \"kubernetes.io/projected/56aa255c-36bb-42e5-bcaf-917928075db3-kube-api-access-46gcr\") pod \"dnsmasq-dns-675f4bcbfc-lkhjf\" (UID: \"56aa255c-36bb-42e5-bcaf-917928075db3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.851561 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zvhbw\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.851652 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-config\") pod \"dnsmasq-dns-78dd6ddcc-zvhbw\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.851678 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnwjm\" (UniqueName: \"kubernetes.io/projected/158c7f54-dce0-47cc-8239-76d257dba505-kube-api-access-lnwjm\") pod \"dnsmasq-dns-78dd6ddcc-zvhbw\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.856403 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.952648 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zvhbw\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.952732 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-config\") pod \"dnsmasq-dns-78dd6ddcc-zvhbw\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.952760 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnwjm\" (UniqueName: \"kubernetes.io/projected/158c7f54-dce0-47cc-8239-76d257dba505-kube-api-access-lnwjm\") pod \"dnsmasq-dns-78dd6ddcc-zvhbw\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.953865 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zvhbw\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.954185 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-config\") pod \"dnsmasq-dns-78dd6ddcc-zvhbw\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 
15:26:38.993303 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnwjm\" (UniqueName: \"kubernetes.io/projected/158c7f54-dce0-47cc-8239-76d257dba505-kube-api-access-lnwjm\") pod \"dnsmasq-dns-78dd6ddcc-zvhbw\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:38 crc kubenswrapper[4697]: I0127 15:26:38.998169 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:26:39 crc kubenswrapper[4697]: I0127 15:26:39.335398 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkhjf"] Jan 27 15:26:39 crc kubenswrapper[4697]: I0127 15:26:39.455564 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zvhbw"] Jan 27 15:26:39 crc kubenswrapper[4697]: W0127 15:26:39.461474 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod158c7f54_dce0_47cc_8239_76d257dba505.slice/crio-9b7f23c0b81fb398acc0f51908ee3f0e3357db23d05b4c1a3556f5f960f431e4 WatchSource:0}: Error finding container 9b7f23c0b81fb398acc0f51908ee3f0e3357db23d05b4c1a3556f5f960f431e4: Status 404 returned error can't find the container with id 9b7f23c0b81fb398acc0f51908ee3f0e3357db23d05b4c1a3556f5f960f431e4 Jan 27 15:26:40 crc kubenswrapper[4697]: I0127 15:26:40.204894 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" event={"ID":"56aa255c-36bb-42e5-bcaf-917928075db3","Type":"ContainerStarted","Data":"305b2003a43e00b5131ce2d08b2436e71a2b752c538c0c7a80c4843aefd511c7"} Jan 27 15:26:40 crc kubenswrapper[4697]: I0127 15:26:40.206755 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" 
event={"ID":"158c7f54-dce0-47cc-8239-76d257dba505","Type":"ContainerStarted","Data":"9b7f23c0b81fb398acc0f51908ee3f0e3357db23d05b4c1a3556f5f960f431e4"} Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.350550 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkhjf"] Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.385979 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2plzf"] Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.387347 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.401433 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2plzf"] Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.506347 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-config\") pod \"dnsmasq-dns-666b6646f7-2plzf\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.506546 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp72l\" (UniqueName: \"kubernetes.io/projected/a6d1b13a-3022-452e-8343-2a3a6ad0856e-kube-api-access-sp72l\") pod \"dnsmasq-dns-666b6646f7-2plzf\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.506577 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2plzf\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " 
pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.608657 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp72l\" (UniqueName: \"kubernetes.io/projected/a6d1b13a-3022-452e-8343-2a3a6ad0856e-kube-api-access-sp72l\") pod \"dnsmasq-dns-666b6646f7-2plzf\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.608699 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2plzf\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.608746 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-config\") pod \"dnsmasq-dns-666b6646f7-2plzf\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.609612 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-config\") pod \"dnsmasq-dns-666b6646f7-2plzf\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.609920 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2plzf\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.625836 4697 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zvhbw"] Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.661893 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp72l\" (UniqueName: \"kubernetes.io/projected/a6d1b13a-3022-452e-8343-2a3a6ad0856e-kube-api-access-sp72l\") pod \"dnsmasq-dns-666b6646f7-2plzf\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.673738 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jhr52"] Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.674811 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.702396 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jhr52"] Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.711089 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.814540 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-config\") pod \"dnsmasq-dns-57d769cc4f-jhr52\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.814672 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jhr52\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.814712 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnzv6\" (UniqueName: \"kubernetes.io/projected/46d05374-2c9c-4e44-ba35-8c9d784fd630-kube-api-access-nnzv6\") pod \"dnsmasq-dns-57d769cc4f-jhr52\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.917445 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jhr52\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.917489 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnzv6\" (UniqueName: \"kubernetes.io/projected/46d05374-2c9c-4e44-ba35-8c9d784fd630-kube-api-access-nnzv6\") pod \"dnsmasq-dns-57d769cc4f-jhr52\" (UID: 
\"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.917532 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-config\") pod \"dnsmasq-dns-57d769cc4f-jhr52\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.918433 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-config\") pod \"dnsmasq-dns-57d769cc4f-jhr52\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.918454 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jhr52\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:41 crc kubenswrapper[4697]: I0127 15:26:41.957391 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnzv6\" (UniqueName: \"kubernetes.io/projected/46d05374-2c9c-4e44-ba35-8c9d784fd630-kube-api-access-nnzv6\") pod \"dnsmasq-dns-57d769cc4f-jhr52\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.013299 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.104326 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2plzf"] Jan 27 15:26:42 crc kubenswrapper[4697]: W0127 15:26:42.172674 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6d1b13a_3022_452e_8343_2a3a6ad0856e.slice/crio-00debdfa4e38028dd025852bdaa4e5eafdb7146cd8d64c6008684df24706577c WatchSource:0}: Error finding container 00debdfa4e38028dd025852bdaa4e5eafdb7146cd8d64c6008684df24706577c: Status 404 returned error can't find the container with id 00debdfa4e38028dd025852bdaa4e5eafdb7146cd8d64c6008684df24706577c Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.245351 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" event={"ID":"a6d1b13a-3022-452e-8343-2a3a6ad0856e","Type":"ContainerStarted","Data":"00debdfa4e38028dd025852bdaa4e5eafdb7146cd8d64c6008684df24706577c"} Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.521028 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.522213 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.525963 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lclcv" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.526117 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.526122 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.534390 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.534611 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.534720 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.544464 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.586532 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628142 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8npts\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-kube-api-access-8npts\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628208 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/abff1f2e-e0f3-4730-888c-2e2d8464f624-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628240 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628262 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-config-data\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628278 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abff1f2e-e0f3-4730-888c-2e2d8464f624-pod-info\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628298 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628316 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628334 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628355 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-server-conf\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628390 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.628409 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.671918 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jhr52"] Jan 27 15:26:42 crc kubenswrapper[4697]: W0127 15:26:42.687346 4697 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46d05374_2c9c_4e44_ba35_8c9d784fd630.slice/crio-b4fa4221bfd1cf6f212fb9a62be06d7087eea94e23d02d7003d2ac841571e4a0 WatchSource:0}: Error finding container b4fa4221bfd1cf6f212fb9a62be06d7087eea94e23d02d7003d2ac841571e4a0: Status 404 returned error can't find the container with id b4fa4221bfd1cf6f212fb9a62be06d7087eea94e23d02d7003d2ac841571e4a0 Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731226 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abff1f2e-e0f3-4730-888c-2e2d8464f624-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731292 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731316 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-config-data\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731337 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abff1f2e-e0f3-4730-888c-2e2d8464f624-pod-info\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731358 4697 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731373 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731388 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731404 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-server-conf\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731437 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731455 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: 
\"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.731477 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8npts\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-kube-api-access-8npts\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.735704 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-config-data\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.736463 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.736474 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.736743 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.736973 4697 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.737385 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-server-conf\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.744083 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abff1f2e-e0f3-4730-888c-2e2d8464f624-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.750733 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.751945 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.753013 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abff1f2e-e0f3-4730-888c-2e2d8464f624-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.757693 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8npts\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-kube-api-access-8npts\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.764500 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.805999 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.812109 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.816751 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.817099 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.817243 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.817377 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.817501 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.817608 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k6rbz" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.819109 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.833650 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.873721 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936181 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936238 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936255 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda501db-ef38-4c1f-b2d6-3e009fe24e40-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936276 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936291 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936311 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda501db-ef38-4c1f-b2d6-3e009fe24e40-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936343 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936367 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936386 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhpwh\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-kube-api-access-lhpwh\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936408 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:42 crc kubenswrapper[4697]: I0127 15:26:42.936434 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.037885 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.037937 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.037980 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.038007 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc 
kubenswrapper[4697]: I0127 15:26:43.038025 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda501db-ef38-4c1f-b2d6-3e009fe24e40-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.038044 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.038060 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.038084 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda501db-ef38-4c1f-b2d6-3e009fe24e40-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.038118 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.038133 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.038150 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhpwh\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-kube-api-access-lhpwh\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.039089 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.043750 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.044503 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.054063 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.056181 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.056373 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.057624 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhpwh\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-kube-api-access-lhpwh\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.057886 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.059553 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.062643 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda501db-ef38-4c1f-b2d6-3e009fe24e40-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.072674 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda501db-ef38-4c1f-b2d6-3e009fe24e40-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.080529 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.156153 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.273247 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" event={"ID":"46d05374-2c9c-4e44-ba35-8c9d784fd630","Type":"ContainerStarted","Data":"b4fa4221bfd1cf6f212fb9a62be06d7087eea94e23d02d7003d2ac841571e4a0"} Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.389015 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 15:26:43 crc kubenswrapper[4697]: I0127 15:26:43.714814 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 15:26:43 crc kubenswrapper[4697]: W0127 15:26:43.739127 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeda501db_ef38_4c1f_b2d6_3e009fe24e40.slice/crio-9a6535abaa4529826966095518341edcfcb18c0fa2342438e3d28ce02fc465b4 WatchSource:0}: Error finding container 9a6535abaa4529826966095518341edcfcb18c0fa2342438e3d28ce02fc465b4: Status 404 returned error can't find the container with id 9a6535abaa4529826966095518341edcfcb18c0fa2342438e3d28ce02fc465b4 Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.014274 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.015370 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.019265 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.019581 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.019825 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.020014 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-pc7dd"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.024076 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.042670 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.053461 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7xj7\" (UniqueName: \"kubernetes.io/projected/b1f75076-2324-44ff-9a33-e083e3de3c02-kube-api-access-z7xj7\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.053515 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b1f75076-2324-44ff-9a33-e083e3de3c02-config-data-default\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.053551 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1f75076-2324-44ff-9a33-e083e3de3c02-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.053580 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b1f75076-2324-44ff-9a33-e083e3de3c02-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.053629 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f75076-2324-44ff-9a33-e083e3de3c02-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.053657 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.053730 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1f75076-2324-44ff-9a33-e083e3de3c02-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.053771 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b1f75076-2324-44ff-9a33-e083e3de3c02-kolla-config\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.155584 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f75076-2324-44ff-9a33-e083e3de3c02-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.155639 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.155708 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1f75076-2324-44ff-9a33-e083e3de3c02-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.155747 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b1f75076-2324-44ff-9a33-e083e3de3c02-kolla-config\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.155826 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7xj7\" (UniqueName: \"kubernetes.io/projected/b1f75076-2324-44ff-9a33-e083e3de3c02-kube-api-access-z7xj7\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.155854 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b1f75076-2324-44ff-9a33-e083e3de3c02-config-data-default\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.155882 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1f75076-2324-44ff-9a33-e083e3de3c02-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.155906 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b1f75076-2324-44ff-9a33-e083e3de3c02-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.157187 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b1f75076-2324-44ff-9a33-e083e3de3c02-config-data-default\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.157575 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b1f75076-2324-44ff-9a33-e083e3de3c02-kolla-config\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.157860 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b1f75076-2324-44ff-9a33-e083e3de3c02-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.157962 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1f75076-2324-44ff-9a33-e083e3de3c02-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.159768 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.180732 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f75076-2324-44ff-9a33-e083e3de3c02-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.186924 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7xj7\" (UniqueName: \"kubernetes.io/projected/b1f75076-2324-44ff-9a33-e083e3de3c02-kube-api-access-z7xj7\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.209954 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1f75076-2324-44ff-9a33-e083e3de3c02-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.215855 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b1f75076-2324-44ff-9a33-e083e3de3c02\") " pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.300223 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eda501db-ef38-4c1f-b2d6-3e009fe24e40","Type":"ContainerStarted","Data":"9a6535abaa4529826966095518341edcfcb18c0fa2342438e3d28ce02fc465b4"}
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.303393 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abff1f2e-e0f3-4730-888c-2e2d8464f624","Type":"ContainerStarted","Data":"e75716073c3165834ccf2316d02758f3237b18de338a5c2bafcbcc880dff5652"}
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.340917 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 15:26:44 crc kubenswrapper[4697]: I0127 15:26:44.791186 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.299311 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.300611 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.305617 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7dnjs"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.305677 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.306069 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.307475 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.326405 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.337228 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b1f75076-2324-44ff-9a33-e083e3de3c02","Type":"ContainerStarted","Data":"76d93192642ca8b2c1db3e937bdf743a49d0365387c30c36db5228af81cc0842"}
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.376724 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a07684ab-be65-430a-89ff-7e3503304f07-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.376761 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a07684ab-be65-430a-89ff-7e3503304f07-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.376827 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07684ab-be65-430a-89ff-7e3503304f07-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.376875 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a07684ab-be65-430a-89ff-7e3503304f07-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.376911 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a07684ab-be65-430a-89ff-7e3503304f07-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.376927 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnssx\" (UniqueName: \"kubernetes.io/projected/a07684ab-be65-430a-89ff-7e3503304f07-kube-api-access-rnssx\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.376954 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.377097 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07684ab-be65-430a-89ff-7e3503304f07-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.478386 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a07684ab-be65-430a-89ff-7e3503304f07-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.478443 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a07684ab-be65-430a-89ff-7e3503304f07-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.478475 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07684ab-be65-430a-89ff-7e3503304f07-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.478517 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a07684ab-be65-430a-89ff-7e3503304f07-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.478559 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a07684ab-be65-430a-89ff-7e3503304f07-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.478600 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnssx\" (UniqueName: \"kubernetes.io/projected/a07684ab-be65-430a-89ff-7e3503304f07-kube-api-access-rnssx\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.478633 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.478702 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07684ab-be65-430a-89ff-7e3503304f07-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.482760 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a07684ab-be65-430a-89ff-7e3503304f07-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.483057 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.483163 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a07684ab-be65-430a-89ff-7e3503304f07-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.483348 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a07684ab-be65-430a-89ff-7e3503304f07-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.484565 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07684ab-be65-430a-89ff-7e3503304f07-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.502717 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07684ab-be65-430a-89ff-7e3503304f07-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.510837 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a07684ab-be65-430a-89ff-7e3503304f07-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.520453 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.521604 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnssx\" (UniqueName: \"kubernetes.io/projected/a07684ab-be65-430a-89ff-7e3503304f07-kube-api-access-rnssx\") pod \"openstack-cell1-galera-0\" (UID: \"a07684ab-be65-430a-89ff-7e3503304f07\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.639339 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.664695 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.671756 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.682858 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.683330 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-675jp"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.683975 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.685724 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bb5256-76f5-4ada-8803-c88ee4ccd881-combined-ca-bundle\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.685885 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc48g\" (UniqueName: \"kubernetes.io/projected/39bb5256-76f5-4ada-8803-c88ee4ccd881-kube-api-access-lc48g\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.685934 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bb5256-76f5-4ada-8803-c88ee4ccd881-memcached-tls-certs\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.685992 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39bb5256-76f5-4ada-8803-c88ee4ccd881-config-data\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.686106 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39bb5256-76f5-4ada-8803-c88ee4ccd881-kolla-config\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.708778 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.819627 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc48g\" (UniqueName: \"kubernetes.io/projected/39bb5256-76f5-4ada-8803-c88ee4ccd881-kube-api-access-lc48g\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.819699 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bb5256-76f5-4ada-8803-c88ee4ccd881-memcached-tls-certs\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.819746 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39bb5256-76f5-4ada-8803-c88ee4ccd881-config-data\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.819777 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39bb5256-76f5-4ada-8803-c88ee4ccd881-kolla-config\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.819820 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bb5256-76f5-4ada-8803-c88ee4ccd881-combined-ca-bundle\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.822327 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39bb5256-76f5-4ada-8803-c88ee4ccd881-kolla-config\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.822939 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39bb5256-76f5-4ada-8803-c88ee4ccd881-config-data\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.825955 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bb5256-76f5-4ada-8803-c88ee4ccd881-combined-ca-bundle\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.846634 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bb5256-76f5-4ada-8803-c88ee4ccd881-memcached-tls-certs\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:45 crc kubenswrapper[4697]: I0127 15:26:45.855861 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc48g\" (UniqueName: \"kubernetes.io/projected/39bb5256-76f5-4ada-8803-c88ee4ccd881-kube-api-access-lc48g\") pod \"memcached-0\" (UID: \"39bb5256-76f5-4ada-8803-c88ee4ccd881\") " pod="openstack/memcached-0"
Jan 27 15:26:46 crc kubenswrapper[4697]: I0127 15:26:46.014144 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.045537 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 15:26:47 crc kubenswrapper[4697]: W0127 15:26:47.103666 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda07684ab_be65_430a_89ff_7e3503304f07.slice/crio-00c8ffba33998e4e15be87af1bdb005a844f1d54c70ef3743ba7caf9a25f5721 WatchSource:0}: Error finding container 00c8ffba33998e4e15be87af1bdb005a844f1d54c70ef3743ba7caf9a25f5721: Status 404 returned error can't find the container with id 00c8ffba33998e4e15be87af1bdb005a844f1d54c70ef3743ba7caf9a25f5721
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.154414 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.359920 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a07684ab-be65-430a-89ff-7e3503304f07","Type":"ContainerStarted","Data":"00c8ffba33998e4e15be87af1bdb005a844f1d54c70ef3743ba7caf9a25f5721"}
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.363665 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"39bb5256-76f5-4ada-8803-c88ee4ccd881","Type":"ContainerStarted","Data":"164f90d7c99900ea63ae39c698b49fc423f14cab0da087ab7abcc34a669b242a"}
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.752077 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.753186 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.756223 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-l82mg"
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.784107 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.871653 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kzx\" (UniqueName: \"kubernetes.io/projected/ea92e796-e6f5-458e-a47f-b7d34100f837-kube-api-access-f9kzx\") pod \"kube-state-metrics-0\" (UID: \"ea92e796-e6f5-458e-a47f-b7d34100f837\") " pod="openstack/kube-state-metrics-0"
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.973298 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kzx\" (UniqueName: \"kubernetes.io/projected/ea92e796-e6f5-458e-a47f-b7d34100f837-kube-api-access-f9kzx\") pod \"kube-state-metrics-0\" (UID: \"ea92e796-e6f5-458e-a47f-b7d34100f837\") " pod="openstack/kube-state-metrics-0"
Jan 27 15:26:47 crc kubenswrapper[4697]: I0127 15:26:47.991929 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kzx\" (UniqueName: \"kubernetes.io/projected/ea92e796-e6f5-458e-a47f-b7d34100f837-kube-api-access-f9kzx\") pod \"kube-state-metrics-0\" (UID: \"ea92e796-e6f5-458e-a47f-b7d34100f837\") " pod="openstack/kube-state-metrics-0"
Jan 27 15:26:48 crc kubenswrapper[4697]: I0127 15:26:48.091565 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.043316 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6sgqx"]
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.046433 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.049327 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.049491 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-brbmq"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.049686 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.055529 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6sgqx"]
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.141419 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-p278q"]
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.143393 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-p278q"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.144484 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72f31a1f-c388-4fed-9842-13f65cf91e9b-var-log-ovn\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.144543 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72f31a1f-c388-4fed-9842-13f65cf91e9b-scripts\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.144595 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrt7z\" (UniqueName: \"kubernetes.io/projected/72f31a1f-c388-4fed-9842-13f65cf91e9b-kube-api-access-mrt7z\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.144616 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72f31a1f-c388-4fed-9842-13f65cf91e9b-var-run-ovn\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.144640 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f31a1f-c388-4fed-9842-13f65cf91e9b-combined-ca-bundle\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.144666 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/72f31a1f-c388-4fed-9842-13f65cf91e9b-ovn-controller-tls-certs\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.144685 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72f31a1f-c388-4fed-9842-13f65cf91e9b-var-run\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.217329 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-p278q"]
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245647 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrt7z\" (UniqueName: \"kubernetes.io/projected/72f31a1f-c388-4fed-9842-13f65cf91e9b-kube-api-access-mrt7z\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245705 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72f31a1f-c388-4fed-9842-13f65cf91e9b-var-run-ovn\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx"
Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245739 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f31a1f-c388-4fed-9842-13f65cf91e9b-combined-ca-bundle\")
pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245777 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/72f31a1f-c388-4fed-9842-13f65cf91e9b-ovn-controller-tls-certs\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245828 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72f31a1f-c388-4fed-9842-13f65cf91e9b-var-run\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245856 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-etc-ovs\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245880 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cb679cc-394c-4c45-8712-058fad1090e7-scripts\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245907 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72f31a1f-c388-4fed-9842-13f65cf91e9b-var-log-ovn\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " 
pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245941 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-var-lib\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245970 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6x9\" (UniqueName: \"kubernetes.io/projected/9cb679cc-394c-4c45-8712-058fad1090e7-kube-api-access-2l6x9\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.245994 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72f31a1f-c388-4fed-9842-13f65cf91e9b-scripts\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.246059 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-var-log\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.246086 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-var-run\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc 
kubenswrapper[4697]: I0127 15:26:51.246496 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72f31a1f-c388-4fed-9842-13f65cf91e9b-var-log-ovn\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.246621 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72f31a1f-c388-4fed-9842-13f65cf91e9b-var-run\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.246689 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72f31a1f-c388-4fed-9842-13f65cf91e9b-var-run-ovn\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.248639 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72f31a1f-c388-4fed-9842-13f65cf91e9b-scripts\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.251375 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/72f31a1f-c388-4fed-9842-13f65cf91e9b-ovn-controller-tls-certs\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.256400 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72f31a1f-c388-4fed-9842-13f65cf91e9b-combined-ca-bundle\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.262165 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrt7z\" (UniqueName: \"kubernetes.io/projected/72f31a1f-c388-4fed-9842-13f65cf91e9b-kube-api-access-mrt7z\") pod \"ovn-controller-6sgqx\" (UID: \"72f31a1f-c388-4fed-9842-13f65cf91e9b\") " pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.349344 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-var-log\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.349384 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-var-run\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.349466 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-etc-ovs\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.349490 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cb679cc-394c-4c45-8712-058fad1090e7-scripts\") pod \"ovn-controller-ovs-p278q\" (UID: 
\"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.349529 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-var-lib\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.349559 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6x9\" (UniqueName: \"kubernetes.io/projected/9cb679cc-394c-4c45-8712-058fad1090e7-kube-api-access-2l6x9\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.351157 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-var-log\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.351324 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-var-run\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.351443 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-etc-ovs\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.353873 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cb679cc-394c-4c45-8712-058fad1090e7-scripts\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.354073 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9cb679cc-394c-4c45-8712-058fad1090e7-var-lib\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.362068 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.363859 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.369691 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.369899 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.374389 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6x9\" (UniqueName: \"kubernetes.io/projected/9cb679cc-394c-4c45-8712-058fad1090e7-kube-api-access-2l6x9\") pod \"ovn-controller-ovs-p278q\" (UID: \"9cb679cc-394c-4c45-8712-058fad1090e7\") " pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.374393 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.375105 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"ovndbcluster-nb-config" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.376006 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4cqsk" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.376194 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.383594 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6sgqx" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.450786 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.450892 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e5111c0-346b-4994-822c-c86f4ee166bc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.450948 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e5111c0-346b-4994-822c-c86f4ee166bc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.451053 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5111c0-346b-4994-822c-c86f4ee166bc-ovsdbserver-nb-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.451144 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5111c0-346b-4994-822c-c86f4ee166bc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.451215 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5111c0-346b-4994-822c-c86f4ee166bc-config\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.451261 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsjh\" (UniqueName: \"kubernetes.io/projected/0e5111c0-346b-4994-822c-c86f4ee166bc-kube-api-access-kwsjh\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.451396 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e5111c0-346b-4994-822c-c86f4ee166bc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.461188 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.552617 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5111c0-346b-4994-822c-c86f4ee166bc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.552658 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5111c0-346b-4994-822c-c86f4ee166bc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.552682 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5111c0-346b-4994-822c-c86f4ee166bc-config\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.552707 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsjh\" (UniqueName: \"kubernetes.io/projected/0e5111c0-346b-4994-822c-c86f4ee166bc-kube-api-access-kwsjh\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.552737 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e5111c0-346b-4994-822c-c86f4ee166bc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.552760 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.552798 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e5111c0-346b-4994-822c-c86f4ee166bc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.552837 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e5111c0-346b-4994-822c-c86f4ee166bc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.553268 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.553329 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e5111c0-346b-4994-822c-c86f4ee166bc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.555707 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e5111c0-346b-4994-822c-c86f4ee166bc-scripts\") pod \"ovsdbserver-nb-0\" 
(UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.559848 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5111c0-346b-4994-822c-c86f4ee166bc-config\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.570232 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5111c0-346b-4994-822c-c86f4ee166bc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.575660 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e5111c0-346b-4994-822c-c86f4ee166bc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.588459 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsjh\" (UniqueName: \"kubernetes.io/projected/0e5111c0-346b-4994-822c-c86f4ee166bc-kube-api-access-kwsjh\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.592735 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5111c0-346b-4994-822c-c86f4ee166bc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.609105 4697 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0e5111c0-346b-4994-822c-c86f4ee166bc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:51 crc kubenswrapper[4697]: I0127 15:26:51.712011 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.516354 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.522515 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.522614 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.525211 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.525378 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.525492 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.525682 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-259vf" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.611189 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5d2358-058b-4d32-86b7-20228aff9677-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 
crc kubenswrapper[4697]: I0127 15:26:54.611248 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5d2358-058b-4d32-86b7-20228aff9677-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.611271 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j7rj\" (UniqueName: \"kubernetes.io/projected/0c5d2358-058b-4d32-86b7-20228aff9677-kube-api-access-4j7rj\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.611400 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2358-058b-4d32-86b7-20228aff9677-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.611454 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c5d2358-058b-4d32-86b7-20228aff9677-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.611639 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5d2358-058b-4d32-86b7-20228aff9677-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.611691 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.611725 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5d2358-058b-4d32-86b7-20228aff9677-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.713520 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.713590 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5d2358-058b-4d32-86b7-20228aff9677-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.713683 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5d2358-058b-4d32-86b7-20228aff9677-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.713708 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5d2358-058b-4d32-86b7-20228aff9677-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.713725 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j7rj\" (UniqueName: \"kubernetes.io/projected/0c5d2358-058b-4d32-86b7-20228aff9677-kube-api-access-4j7rj\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.713769 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2358-058b-4d32-86b7-20228aff9677-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.713814 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c5d2358-058b-4d32-86b7-20228aff9677-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.713870 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.715563 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5d2358-058b-4d32-86b7-20228aff9677-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.715950 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2358-058b-4d32-86b7-20228aff9677-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.713880 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5d2358-058b-4d32-86b7-20228aff9677-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.717833 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c5d2358-058b-4d32-86b7-20228aff9677-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.721544 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5d2358-058b-4d32-86b7-20228aff9677-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.728733 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5d2358-058b-4d32-86b7-20228aff9677-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.733635 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0c5d2358-058b-4d32-86b7-20228aff9677-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.734395 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j7rj\" (UniqueName: \"kubernetes.io/projected/0c5d2358-058b-4d32-86b7-20228aff9677-kube-api-access-4j7rj\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.757607 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0c5d2358-058b-4d32-86b7-20228aff9677\") " pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:54 crc kubenswrapper[4697]: I0127 15:26:54.852455 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 15:26:55 crc kubenswrapper[4697]: I0127 15:26:55.109546 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:26:55 crc kubenswrapper[4697]: I0127 15:26:55.109958 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:27:07 crc kubenswrapper[4697]: E0127 15:27:07.512680 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 27 15:27:07 crc kubenswrapper[4697]: E0127 15:27:07.513467 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7xj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(b1f75076-2324-44ff-9a33-e083e3de3c02): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:27:07 crc kubenswrapper[4697]: E0127 15:27:07.514811 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="b1f75076-2324-44ff-9a33-e083e3de3c02" Jan 27 15:27:07 crc kubenswrapper[4697]: E0127 15:27:07.705205 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="b1f75076-2324-44ff-9a33-e083e3de3c02" Jan 27 15:27:08 crc kubenswrapper[4697]: E0127 15:27:08.033682 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 27 15:27:08 crc kubenswrapper[4697]: E0127 15:27:08.033894 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhpwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(eda501db-ef38-4c1f-b2d6-3e009fe24e40): ErrImagePull: rpc error: code = Canceled 
desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:27:08 crc kubenswrapper[4697]: E0127 15:27:08.035014 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" Jan 27 15:27:08 crc kubenswrapper[4697]: E0127 15:27:08.536963 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" Jan 27 15:27:12 crc kubenswrapper[4697]: E0127 15:27:12.968844 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 27 15:27:12 crc kubenswrapper[4697]: E0127 15:27:12.970130 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n567hf4h59dh657hc8hf5h694h5f6h5dbh67dh69h86h554h654h664h5cdhc9h8fhc9h646h9h697hd6h548h558h58fh68chd7h578h54bh7bhc9q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc48g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(39bb5256-76f5-4ada-8803-c88ee4ccd881): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:27:12 crc kubenswrapper[4697]: E0127 15:27:12.971392 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="39bb5256-76f5-4ada-8803-c88ee4ccd881" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.567121 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="39bb5256-76f5-4ada-8803-c88ee4ccd881" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.761892 4697 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.762450 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp72l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesy
stem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-2plzf_openstack(a6d1b13a-3022-452e-8343-2a3a6ad0856e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.764620 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" podUID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.766325 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.766516 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnwjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-zvhbw_openstack(158c7f54-dce0-47cc-8239-76d257dba505): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.767837 4697 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" podUID="158c7f54-dce0-47cc-8239-76d257dba505" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.817766 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.817951 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnzv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-jhr52_openstack(46d05374-2c9c-4e44-ba35-8c9d784fd630): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.823214 4697 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" podUID="46d05374-2c9c-4e44-ba35-8c9d784fd630" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.882881 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.883099 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46gcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-lkhjf_openstack(56aa255c-36bb-42e5-bcaf-917928075db3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:27:13 crc kubenswrapper[4697]: E0127 15:27:13.885979 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" podUID="56aa255c-36bb-42e5-bcaf-917928075db3" Jan 27 15:27:14 crc kubenswrapper[4697]: I0127 15:27:14.317010 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6sgqx"] Jan 27 15:27:14 crc kubenswrapper[4697]: I0127 15:27:14.499774 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 15:27:14 crc kubenswrapper[4697]: W0127 15:27:14.501889 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea92e796_e6f5_458e_a47f_b7d34100f837.slice/crio-c1f29118d462bffab6a5dee92e2ef85537d6518473f2db018ddb652d3aff6a36 WatchSource:0}: Error finding container c1f29118d462bffab6a5dee92e2ef85537d6518473f2db018ddb652d3aff6a36: Status 404 returned error can't find the container with id c1f29118d462bffab6a5dee92e2ef85537d6518473f2db018ddb652d3aff6a36 Jan 27 15:27:14 crc kubenswrapper[4697]: E0127 15:27:14.583979 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" podUID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" Jan 27 15:27:14 crc kubenswrapper[4697]: I0127 15:27:14.585377 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a07684ab-be65-430a-89ff-7e3503304f07","Type":"ContainerStarted","Data":"6bdd83a5c1a844b950bb6684ea523fc6bda9d4db83fbbca7cf88c456a644467c"} Jan 27 15:27:14 crc kubenswrapper[4697]: I0127 15:27:14.585410 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 15:27:14 crc kubenswrapper[4697]: I0127 15:27:14.585426 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"ea92e796-e6f5-458e-a47f-b7d34100f837","Type":"ContainerStarted","Data":"c1f29118d462bffab6a5dee92e2ef85537d6518473f2db018ddb652d3aff6a36"} Jan 27 15:27:14 crc kubenswrapper[4697]: I0127 15:27:14.585437 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6sgqx" event={"ID":"72f31a1f-c388-4fed-9842-13f65cf91e9b","Type":"ContainerStarted","Data":"16e2ae0aeb06e44fee2fbfe04f3d46969f67aab1b2360900a1dc6ad8dd929954"} Jan 27 15:27:14 crc kubenswrapper[4697]: E0127 15:27:14.587123 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" podUID="46d05374-2c9c-4e44-ba35-8c9d784fd630" Jan 27 15:27:14 crc kubenswrapper[4697]: W0127 15:27:14.665915 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e5111c0_346b_4994_822c_c86f4ee166bc.slice/crio-f3def0fa946baff23a99d33d5f76db788cab3cff84a2988bd59646dd4ea1d849 WatchSource:0}: Error finding container f3def0fa946baff23a99d33d5f76db788cab3cff84a2988bd59646dd4ea1d849: Status 404 returned error can't find the container with id f3def0fa946baff23a99d33d5f76db788cab3cff84a2988bd59646dd4ea1d849 Jan 27 15:27:14 crc kubenswrapper[4697]: I0127 15:27:14.943491 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.005819 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.054164 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-dns-svc\") pod \"158c7f54-dce0-47cc-8239-76d257dba505\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.054213 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-config\") pod \"158c7f54-dce0-47cc-8239-76d257dba505\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.054256 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnwjm\" (UniqueName: \"kubernetes.io/projected/158c7f54-dce0-47cc-8239-76d257dba505-kube-api-access-lnwjm\") pod \"158c7f54-dce0-47cc-8239-76d257dba505\" (UID: \"158c7f54-dce0-47cc-8239-76d257dba505\") " Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.055611 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "158c7f54-dce0-47cc-8239-76d257dba505" (UID: "158c7f54-dce0-47cc-8239-76d257dba505"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.055835 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-config" (OuterVolumeSpecName: "config") pod "158c7f54-dce0-47cc-8239-76d257dba505" (UID: "158c7f54-dce0-47cc-8239-76d257dba505"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.061089 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158c7f54-dce0-47cc-8239-76d257dba505-kube-api-access-lnwjm" (OuterVolumeSpecName: "kube-api-access-lnwjm") pod "158c7f54-dce0-47cc-8239-76d257dba505" (UID: "158c7f54-dce0-47cc-8239-76d257dba505"). InnerVolumeSpecName "kube-api-access-lnwjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.155598 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46gcr\" (UniqueName: \"kubernetes.io/projected/56aa255c-36bb-42e5-bcaf-917928075db3-kube-api-access-46gcr\") pod \"56aa255c-36bb-42e5-bcaf-917928075db3\" (UID: \"56aa255c-36bb-42e5-bcaf-917928075db3\") " Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.155704 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aa255c-36bb-42e5-bcaf-917928075db3-config\") pod \"56aa255c-36bb-42e5-bcaf-917928075db3\" (UID: \"56aa255c-36bb-42e5-bcaf-917928075db3\") " Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.157417 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56aa255c-36bb-42e5-bcaf-917928075db3-config" (OuterVolumeSpecName: "config") pod "56aa255c-36bb-42e5-bcaf-917928075db3" (UID: "56aa255c-36bb-42e5-bcaf-917928075db3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.157536 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnwjm\" (UniqueName: \"kubernetes.io/projected/158c7f54-dce0-47cc-8239-76d257dba505-kube-api-access-lnwjm\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.157550 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.157559 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c7f54-dce0-47cc-8239-76d257dba505-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.159428 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56aa255c-36bb-42e5-bcaf-917928075db3-kube-api-access-46gcr" (OuterVolumeSpecName: "kube-api-access-46gcr") pod "56aa255c-36bb-42e5-bcaf-917928075db3" (UID: "56aa255c-36bb-42e5-bcaf-917928075db3"). InnerVolumeSpecName "kube-api-access-46gcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.259172 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46gcr\" (UniqueName: \"kubernetes.io/projected/56aa255c-36bb-42e5-bcaf-917928075db3-kube-api-access-46gcr\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.259211 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aa255c-36bb-42e5-bcaf-917928075db3-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.295330 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-p278q"] Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.435640 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 15:27:15 crc kubenswrapper[4697]: W0127 15:27:15.533368 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cb679cc_394c_4c45_8712_058fad1090e7.slice/crio-81af26bb5b00a5fc54336baaa72297377346a3cfcefb1718f3f94aeadf454e81 WatchSource:0}: Error finding container 81af26bb5b00a5fc54336baaa72297377346a3cfcefb1718f3f94aeadf454e81: Status 404 returned error can't find the container with id 81af26bb5b00a5fc54336baaa72297377346a3cfcefb1718f3f94aeadf454e81 Jan 27 15:27:15 crc kubenswrapper[4697]: W0127 15:27:15.537543 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c5d2358_058b_4d32_86b7_20228aff9677.slice/crio-3580b20a78fb70df830048d2b797c61506396a209df7922ab9a815962326e46a WatchSource:0}: Error finding container 3580b20a78fb70df830048d2b797c61506396a209df7922ab9a815962326e46a: Status 404 returned error can't find the container with id 3580b20a78fb70df830048d2b797c61506396a209df7922ab9a815962326e46a Jan 27 15:27:15 
crc kubenswrapper[4697]: I0127 15:27:15.594267 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c5d2358-058b-4d32-86b7-20228aff9677","Type":"ContainerStarted","Data":"3580b20a78fb70df830048d2b797c61506396a209df7922ab9a815962326e46a"} Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.596039 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abff1f2e-e0f3-4730-888c-2e2d8464f624","Type":"ContainerStarted","Data":"e08609f1f8f84cd80d5406d5f5667af27d378fc1d71f72faca38aca992d8af76"} Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.598558 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0e5111c0-346b-4994-822c-c86f4ee166bc","Type":"ContainerStarted","Data":"f3def0fa946baff23a99d33d5f76db788cab3cff84a2988bd59646dd4ea1d849"} Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.600374 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" event={"ID":"56aa255c-36bb-42e5-bcaf-917928075db3","Type":"ContainerDied","Data":"305b2003a43e00b5131ce2d08b2436e71a2b752c538c0c7a80c4843aefd511c7"} Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.600460 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkhjf" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.602899 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p278q" event={"ID":"9cb679cc-394c-4c45-8712-058fad1090e7","Type":"ContainerStarted","Data":"81af26bb5b00a5fc54336baaa72297377346a3cfcefb1718f3f94aeadf454e81"} Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.606931 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.607845 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zvhbw" event={"ID":"158c7f54-dce0-47cc-8239-76d257dba505","Type":"ContainerDied","Data":"9b7f23c0b81fb398acc0f51908ee3f0e3357db23d05b4c1a3556f5f960f431e4"} Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.682313 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkhjf"] Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.713923 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkhjf"] Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.757906 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zvhbw"] Jan 27 15:27:15 crc kubenswrapper[4697]: I0127 15:27:15.763320 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zvhbw"] Jan 27 15:27:16 crc kubenswrapper[4697]: I0127 15:27:16.577329 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158c7f54-dce0-47cc-8239-76d257dba505" path="/var/lib/kubelet/pods/158c7f54-dce0-47cc-8239-76d257dba505/volumes" Jan 27 15:27:16 crc kubenswrapper[4697]: I0127 15:27:16.577701 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56aa255c-36bb-42e5-bcaf-917928075db3" path="/var/lib/kubelet/pods/56aa255c-36bb-42e5-bcaf-917928075db3/volumes" Jan 27 15:27:18 crc kubenswrapper[4697]: I0127 15:27:18.629219 4697 generic.go:334] "Generic (PLEG): container finished" podID="a07684ab-be65-430a-89ff-7e3503304f07" containerID="6bdd83a5c1a844b950bb6684ea523fc6bda9d4db83fbbca7cf88c456a644467c" exitCode=0 Jan 27 15:27:18 crc kubenswrapper[4697]: I0127 15:27:18.629481 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"a07684ab-be65-430a-89ff-7e3503304f07","Type":"ContainerDied","Data":"6bdd83a5c1a844b950bb6684ea523fc6bda9d4db83fbbca7cf88c456a644467c"} Jan 27 15:27:19 crc kubenswrapper[4697]: I0127 15:27:19.638691 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0e5111c0-346b-4994-822c-c86f4ee166bc","Type":"ContainerStarted","Data":"647dfc92d70bb59a68e295e5985eead19251739361634e89e925d7b3aa90716c"} Jan 27 15:27:19 crc kubenswrapper[4697]: I0127 15:27:19.640831 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p278q" event={"ID":"9cb679cc-394c-4c45-8712-058fad1090e7","Type":"ContainerStarted","Data":"b6347687c7dde0772ea62142acbb3699f4cac542487ed08e35837153dcc7bcb6"} Jan 27 15:27:19 crc kubenswrapper[4697]: I0127 15:27:19.643995 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c5d2358-058b-4d32-86b7-20228aff9677","Type":"ContainerStarted","Data":"1ab3660d159108db7f8815ea0955b2e569a3016a6a64e2bfc483d6c9d636832d"} Jan 27 15:27:19 crc kubenswrapper[4697]: I0127 15:27:19.647041 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a07684ab-be65-430a-89ff-7e3503304f07","Type":"ContainerStarted","Data":"9e29f9acc59696a56ab6777031c6899588fc53bd47408b63963ba7827782fc77"} Jan 27 15:27:19 crc kubenswrapper[4697]: I0127 15:27:19.653524 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ea92e796-e6f5-458e-a47f-b7d34100f837","Type":"ContainerStarted","Data":"7c1379f719177b3e46f07e56f9214d8deef213a4eee982fdfd7188d710e672a0"} Jan 27 15:27:19 crc kubenswrapper[4697]: I0127 15:27:19.654230 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 15:27:19 crc kubenswrapper[4697]: I0127 15:27:19.690056 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=27.890801248 podStartE2EDuration="32.69002951s" podCreationTimestamp="2026-01-27 15:26:47 +0000 UTC" firstStartedPulling="2026-01-27 15:27:14.503867009 +0000 UTC m=+1130.676266790" lastFinishedPulling="2026-01-27 15:27:19.303095271 +0000 UTC m=+1135.475495052" observedRunningTime="2026-01-27 15:27:19.685317833 +0000 UTC m=+1135.857717614" watchObservedRunningTime="2026-01-27 15:27:19.69002951 +0000 UTC m=+1135.862429291" Jan 27 15:27:19 crc kubenswrapper[4697]: I0127 15:27:19.709162 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.129876422 podStartE2EDuration="35.709143932s" podCreationTimestamp="2026-01-27 15:26:44 +0000 UTC" firstStartedPulling="2026-01-27 15:26:47.112345332 +0000 UTC m=+1103.284745113" lastFinishedPulling="2026-01-27 15:27:13.691612842 +0000 UTC m=+1129.864012623" observedRunningTime="2026-01-27 15:27:19.705485052 +0000 UTC m=+1135.877884833" watchObservedRunningTime="2026-01-27 15:27:19.709143932 +0000 UTC m=+1135.881543713" Jan 27 15:27:20 crc kubenswrapper[4697]: I0127 15:27:20.682551 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6sgqx" event={"ID":"72f31a1f-c388-4fed-9842-13f65cf91e9b","Type":"ContainerStarted","Data":"80021a4c8001b854705d4a4ce0148bed3550fdafc1596988af6f67a57f5f1a52"} Jan 27 15:27:20 crc kubenswrapper[4697]: I0127 15:27:20.684190 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-6sgqx" Jan 27 15:27:20 crc kubenswrapper[4697]: I0127 15:27:20.688960 4697 generic.go:334] "Generic (PLEG): container finished" podID="9cb679cc-394c-4c45-8712-058fad1090e7" containerID="b6347687c7dde0772ea62142acbb3699f4cac542487ed08e35837153dcc7bcb6" exitCode=0 Jan 27 15:27:20 crc kubenswrapper[4697]: I0127 15:27:20.690040 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p278q" 
event={"ID":"9cb679cc-394c-4c45-8712-058fad1090e7","Type":"ContainerDied","Data":"b6347687c7dde0772ea62142acbb3699f4cac542487ed08e35837153dcc7bcb6"} Jan 27 15:27:20 crc kubenswrapper[4697]: I0127 15:27:20.705877 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6sgqx" podStartSLOduration=24.737857206 podStartE2EDuration="29.705861686s" podCreationTimestamp="2026-01-27 15:26:51 +0000 UTC" firstStartedPulling="2026-01-27 15:27:14.330513058 +0000 UTC m=+1130.502912849" lastFinishedPulling="2026-01-27 15:27:19.298517538 +0000 UTC m=+1135.470917329" observedRunningTime="2026-01-27 15:27:20.70443576 +0000 UTC m=+1136.876835541" watchObservedRunningTime="2026-01-27 15:27:20.705861686 +0000 UTC m=+1136.878261467" Jan 27 15:27:21 crc kubenswrapper[4697]: I0127 15:27:21.697732 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p278q" event={"ID":"9cb679cc-394c-4c45-8712-058fad1090e7","Type":"ContainerStarted","Data":"f85306697d1a337211d82766b33c051a0f33786d557b695c5707ba71f9e10a1b"} Jan 27 15:27:21 crc kubenswrapper[4697]: I0127 15:27:21.698214 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:27:21 crc kubenswrapper[4697]: I0127 15:27:21.698228 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p278q" event={"ID":"9cb679cc-394c-4c45-8712-058fad1090e7","Type":"ContainerStarted","Data":"866e3a5906a3823b4e9b79dc403fdbb1c986fafd820e044fb56a8668ea418d1b"} Jan 27 15:27:21 crc kubenswrapper[4697]: I0127 15:27:21.698242 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:27:21 crc kubenswrapper[4697]: I0127 15:27:21.701471 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"b1f75076-2324-44ff-9a33-e083e3de3c02","Type":"ContainerStarted","Data":"9f180ddaac5724ef22b2e5bdc441899f472b208d58db6c12d95c4b6dbc44c38b"} Jan 27 15:27:21 crc kubenswrapper[4697]: I0127 15:27:21.731819 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-p278q" podStartSLOduration=27.001785888 podStartE2EDuration="30.731770002s" podCreationTimestamp="2026-01-27 15:26:51 +0000 UTC" firstStartedPulling="2026-01-27 15:27:15.535288861 +0000 UTC m=+1131.707688642" lastFinishedPulling="2026-01-27 15:27:19.265272975 +0000 UTC m=+1135.437672756" observedRunningTime="2026-01-27 15:27:21.724118432 +0000 UTC m=+1137.896518233" watchObservedRunningTime="2026-01-27 15:27:21.731770002 +0000 UTC m=+1137.904169783" Jan 27 15:27:25 crc kubenswrapper[4697]: I0127 15:27:25.109076 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:27:25 crc kubenswrapper[4697]: I0127 15:27:25.109638 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:27:25 crc kubenswrapper[4697]: I0127 15:27:25.642052 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 15:27:25 crc kubenswrapper[4697]: I0127 15:27:25.642232 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 15:27:25 crc kubenswrapper[4697]: I0127 15:27:25.731042 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 15:27:25 crc kubenswrapper[4697]: I0127 15:27:25.743540 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eda501db-ef38-4c1f-b2d6-3e009fe24e40","Type":"ContainerStarted","Data":"840022435db03186405d1974a393a665897eadcc1c7df67f122cbcc886b3f4cc"} Jan 27 15:27:25 crc kubenswrapper[4697]: I0127 15:27:25.746090 4697 generic.go:334] "Generic (PLEG): container finished" podID="b1f75076-2324-44ff-9a33-e083e3de3c02" containerID="9f180ddaac5724ef22b2e5bdc441899f472b208d58db6c12d95c4b6dbc44c38b" exitCode=0 Jan 27 15:27:25 crc kubenswrapper[4697]: I0127 15:27:25.746479 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b1f75076-2324-44ff-9a33-e083e3de3c02","Type":"ContainerDied","Data":"9f180ddaac5724ef22b2e5bdc441899f472b208d58db6c12d95c4b6dbc44c38b"} Jan 27 15:27:25 crc kubenswrapper[4697]: I0127 15:27:25.868065 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 15:27:28 crc kubenswrapper[4697]: I0127 15:27:28.100332 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.772849 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" event={"ID":"46d05374-2c9c-4e44-ba35-8c9d784fd630","Type":"ContainerStarted","Data":"c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d"} Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.776025 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"39bb5256-76f5-4ada-8803-c88ee4ccd881","Type":"ContainerStarted","Data":"2e2de8a7542311c5fe02a8d3f67f2593e310af9a7474e1fbf014aee64040edd5"} Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.776559 4697 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/memcached-0" Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.778443 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b1f75076-2324-44ff-9a33-e083e3de3c02","Type":"ContainerStarted","Data":"4c45de44e6271571fcbdba10a46f5bebd63b4fb6dd0cf4d6124375e836f613da"} Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.780388 4697 generic.go:334] "Generic (PLEG): container finished" podID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" containerID="0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5" exitCode=0 Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.780442 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" event={"ID":"a6d1b13a-3022-452e-8343-2a3a6ad0856e","Type":"ContainerDied","Data":"0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5"} Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.791830 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c5d2358-058b-4d32-86b7-20228aff9677","Type":"ContainerStarted","Data":"a86c730e7dee03f323534de03f61bbed1ac66a26ab7fdf81ca43d38851d79cb5"} Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.798518 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0e5111c0-346b-4994-822c-c86f4ee166bc","Type":"ContainerStarted","Data":"b00fec345edb3a7ad2c98f70200d83895d0ea00e49e3a55f88eaca99c38983b6"} Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.844838 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371989.009958 podStartE2EDuration="47.844817774s" podCreationTimestamp="2026-01-27 15:26:42 +0000 UTC" firstStartedPulling="2026-01-27 15:26:44.805411865 +0000 UTC m=+1100.977811646" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
15:27:29.844559758 +0000 UTC m=+1146.016959559" watchObservedRunningTime="2026-01-27 15:27:29.844817774 +0000 UTC m=+1146.017217555" Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.853472 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.866120 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.46329319 podStartE2EDuration="44.866098081s" podCreationTimestamp="2026-01-27 15:26:45 +0000 UTC" firstStartedPulling="2026-01-27 15:26:47.19871161 +0000 UTC m=+1103.371111391" lastFinishedPulling="2026-01-27 15:27:29.601516501 +0000 UTC m=+1145.773916282" observedRunningTime="2026-01-27 15:27:29.861163499 +0000 UTC m=+1146.033563290" watchObservedRunningTime="2026-01-27 15:27:29.866098081 +0000 UTC m=+1146.038497862" Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.889500 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.396145271 podStartE2EDuration="36.88947772s" podCreationTimestamp="2026-01-27 15:26:53 +0000 UTC" firstStartedPulling="2026-01-27 15:27:15.539991178 +0000 UTC m=+1131.712390959" lastFinishedPulling="2026-01-27 15:27:29.033323627 +0000 UTC m=+1145.205723408" observedRunningTime="2026-01-27 15:27:29.881482902 +0000 UTC m=+1146.053882683" watchObservedRunningTime="2026-01-27 15:27:29.88947772 +0000 UTC m=+1146.061877511" Jan 27 15:27:29 crc kubenswrapper[4697]: I0127 15:27:29.912618 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=28.920366448 podStartE2EDuration="39.912597533s" podCreationTimestamp="2026-01-27 15:26:50 +0000 UTC" firstStartedPulling="2026-01-27 15:27:14.669084219 +0000 UTC m=+1130.841483990" lastFinishedPulling="2026-01-27 15:27:25.661315294 +0000 UTC m=+1141.833715075" 
observedRunningTime="2026-01-27 15:27:29.906272156 +0000 UTC m=+1146.078671947" watchObservedRunningTime="2026-01-27 15:27:29.912597533 +0000 UTC m=+1146.084997314" Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.712541 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.763945 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.811332 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" event={"ID":"a6d1b13a-3022-452e-8343-2a3a6ad0856e","Type":"ContainerStarted","Data":"8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441"} Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.812717 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.815968 4697 generic.go:334] "Generic (PLEG): container finished" podID="46d05374-2c9c-4e44-ba35-8c9d784fd630" containerID="c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d" exitCode=0 Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.817175 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" event={"ID":"46d05374-2c9c-4e44-ba35-8c9d784fd630","Type":"ContainerDied","Data":"c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d"} Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.817235 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.850226 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" podStartSLOduration=2.996895831 podStartE2EDuration="49.850197952s" 
podCreationTimestamp="2026-01-27 15:26:41 +0000 UTC" firstStartedPulling="2026-01-27 15:26:42.181371039 +0000 UTC m=+1098.353770820" lastFinishedPulling="2026-01-27 15:27:29.03467317 +0000 UTC m=+1145.207072941" observedRunningTime="2026-01-27 15:27:30.843752273 +0000 UTC m=+1147.016152094" watchObservedRunningTime="2026-01-27 15:27:30.850197952 +0000 UTC m=+1147.022597773" Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.852657 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.936582 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 15:27:30 crc kubenswrapper[4697]: I0127 15:27:30.954342 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.149001 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jhr52"] Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.180922 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rpfpm"] Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.182196 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.187618 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.203117 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rpfpm"] Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.260366 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7qm\" (UniqueName: \"kubernetes.io/projected/a20f2903-bfe4-4b89-8333-09e1adbd4605-kube-api-access-nm7qm\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.260448 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.260540 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-config\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.260558 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.298344 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mfkdn"] Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.299541 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.301345 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.313586 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mfkdn"] Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.362623 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm7qm\" (UniqueName: \"kubernetes.io/projected/a20f2903-bfe4-4b89-8333-09e1adbd4605-kube-api-access-nm7qm\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.362695 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.362778 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.362814 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-config\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.363543 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.363799 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.363841 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-config\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.384474 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm7qm\" (UniqueName: \"kubernetes.io/projected/a20f2903-bfe4-4b89-8333-09e1adbd4605-kube-api-access-nm7qm\") pod \"dnsmasq-dns-7fd796d7df-rpfpm\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.464563 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-config\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.464758 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-combined-ca-bundle\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.464820 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-ovs-rundir\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.464849 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.465193 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-ovn-rundir\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.465280 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-tfnt8\" (UniqueName: \"kubernetes.io/projected/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-kube-api-access-tfnt8\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.505334 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.567089 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-ovn-rundir\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.567330 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfnt8\" (UniqueName: \"kubernetes.io/projected/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-kube-api-access-tfnt8\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.567356 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-config\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.567382 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-combined-ca-bundle\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " 
pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.567398 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-ovs-rundir\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.567412 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.568214 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-ovn-rundir\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.568561 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-ovs-rundir\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.576646 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc 
kubenswrapper[4697]: I0127 15:27:31.577164 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-combined-ca-bundle\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.591950 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-config\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.601585 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2plzf"] Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.611455 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfnt8\" (UniqueName: \"kubernetes.io/projected/e7853fb5-d995-44ae-b1b5-c4c38fcadbd2-kube-api-access-tfnt8\") pod \"ovn-controller-metrics-mfkdn\" (UID: \"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2\") " pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.616262 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mfkdn" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.632772 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5fwsh"] Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.642098 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.645025 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.670027 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5fwsh"] Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.770723 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.771139 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.771178 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srcz6\" (UniqueName: \"kubernetes.io/projected/17d130bb-85ab-4a76-a5cb-09370b6165b7-kube-api-access-srcz6\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.771212 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.771269 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-config\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.832711 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" podUID="46d05374-2c9c-4e44-ba35-8c9d784fd630" containerName="dnsmasq-dns" containerID="cri-o://f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d" gracePeriod=10 Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.833146 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" event={"ID":"46d05374-2c9c-4e44-ba35-8c9d784fd630","Type":"ContainerStarted","Data":"f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d"} Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.833198 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.856538 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" podStartSLOduration=3.952125278 podStartE2EDuration="50.856523033s" podCreationTimestamp="2026-01-27 15:26:41 +0000 UTC" firstStartedPulling="2026-01-27 15:26:42.694256835 +0000 UTC m=+1098.866656616" lastFinishedPulling="2026-01-27 15:27:29.59865459 +0000 UTC m=+1145.771054371" observedRunningTime="2026-01-27 15:27:31.851403166 +0000 UTC m=+1148.023802947" watchObservedRunningTime="2026-01-27 15:27:31.856523033 +0000 UTC m=+1148.028922814" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.874662 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.874746 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-config\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.874807 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.874856 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.874884 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srcz6\" (UniqueName: \"kubernetes.io/projected/17d130bb-85ab-4a76-a5cb-09370b6165b7-kube-api-access-srcz6\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.876086 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.876135 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.876631 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.878019 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-config\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.900749 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srcz6\" (UniqueName: \"kubernetes.io/projected/17d130bb-85ab-4a76-a5cb-09370b6165b7-kube-api-access-srcz6\") pod \"dnsmasq-dns-86db49b7ff-5fwsh\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:31 crc kubenswrapper[4697]: I0127 15:27:31.905398 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.002155 4697 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rpfpm"] Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.002514 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.123641 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mfkdn"] Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.323250 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.347579 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.357805 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.358052 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.358206 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vqg6t" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.358344 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.367528 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.496912 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/93a06b1b-be54-4517-a12a-83c9a4f91367-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc 
kubenswrapper[4697]: I0127 15:27:32.496953 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93a06b1b-be54-4517-a12a-83c9a4f91367-scripts\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.496981 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2zxr\" (UniqueName: \"kubernetes.io/projected/93a06b1b-be54-4517-a12a-83c9a4f91367-kube-api-access-f2zxr\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.496995 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/93a06b1b-be54-4517-a12a-83c9a4f91367-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.497011 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/93a06b1b-be54-4517-a12a-83c9a4f91367-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.497033 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a06b1b-be54-4517-a12a-83c9a4f91367-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.497070 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93a06b1b-be54-4517-a12a-83c9a4f91367-config\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.599121 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93a06b1b-be54-4517-a12a-83c9a4f91367-config\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.599510 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/93a06b1b-be54-4517-a12a-83c9a4f91367-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.599531 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93a06b1b-be54-4517-a12a-83c9a4f91367-scripts\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.599548 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2zxr\" (UniqueName: \"kubernetes.io/projected/93a06b1b-be54-4517-a12a-83c9a4f91367-kube-api-access-f2zxr\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.599564 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/93a06b1b-be54-4517-a12a-83c9a4f91367-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.599581 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/93a06b1b-be54-4517-a12a-83c9a4f91367-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.599605 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a06b1b-be54-4517-a12a-83c9a4f91367-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.600287 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93a06b1b-be54-4517-a12a-83c9a4f91367-config\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.600560 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/93a06b1b-be54-4517-a12a-83c9a4f91367-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.601170 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93a06b1b-be54-4517-a12a-83c9a4f91367-scripts\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.619483 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/93a06b1b-be54-4517-a12a-83c9a4f91367-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.622292 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/93a06b1b-be54-4517-a12a-83c9a4f91367-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.622620 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a06b1b-be54-4517-a12a-83c9a4f91367-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.628643 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2zxr\" (UniqueName: \"kubernetes.io/projected/93a06b1b-be54-4517-a12a-83c9a4f91367-kube-api-access-f2zxr\") pod \"ovn-northd-0\" (UID: \"93a06b1b-be54-4517-a12a-83c9a4f91367\") " pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.679539 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.785129 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5fwsh"] Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.821599 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.890572 4697 generic.go:334] "Generic (PLEG): container finished" podID="a20f2903-bfe4-4b89-8333-09e1adbd4605" containerID="1d590c93adf75103f50007e1c4095f12338e543b14efd3d6e8821c3aed97c7a2" exitCode=0 Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.890696 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" event={"ID":"a20f2903-bfe4-4b89-8333-09e1adbd4605","Type":"ContainerDied","Data":"1d590c93adf75103f50007e1c4095f12338e543b14efd3d6e8821c3aed97c7a2"} Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.892063 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" event={"ID":"a20f2903-bfe4-4b89-8333-09e1adbd4605","Type":"ContainerStarted","Data":"49c28a7f86c20d9c166a2458901d474d480bf30a7e15885cc46e6c5337ecfd5d"} Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.910622 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-dns-svc\") pod \"46d05374-2c9c-4e44-ba35-8c9d784fd630\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.910683 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-config\") pod \"46d05374-2c9c-4e44-ba35-8c9d784fd630\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.910742 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnzv6\" (UniqueName: \"kubernetes.io/projected/46d05374-2c9c-4e44-ba35-8c9d784fd630-kube-api-access-nnzv6\") pod \"46d05374-2c9c-4e44-ba35-8c9d784fd630\" (UID: \"46d05374-2c9c-4e44-ba35-8c9d784fd630\") " 
Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.918936 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mfkdn" event={"ID":"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2","Type":"ContainerStarted","Data":"c4a961f2b27839ed5d10da442b134d702d21ad72e9a941d0fa2e619dd184d189"} Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.918982 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mfkdn" event={"ID":"e7853fb5-d995-44ae-b1b5-c4c38fcadbd2","Type":"ContainerStarted","Data":"1cbf28596e45d7e11e9ac8bfed4920637cff80f7307191d533dbd04234ad41e7"} Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.927137 4697 generic.go:334] "Generic (PLEG): container finished" podID="46d05374-2c9c-4e44-ba35-8c9d784fd630" containerID="f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d" exitCode=0 Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.927255 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" event={"ID":"46d05374-2c9c-4e44-ba35-8c9d784fd630","Type":"ContainerDied","Data":"f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d"} Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.927283 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" event={"ID":"46d05374-2c9c-4e44-ba35-8c9d784fd630","Type":"ContainerDied","Data":"b4fa4221bfd1cf6f212fb9a62be06d7087eea94e23d02d7003d2ac841571e4a0"} Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.927299 4697 scope.go:117] "RemoveContainer" containerID="f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.927440 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jhr52" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.945766 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d05374-2c9c-4e44-ba35-8c9d784fd630-kube-api-access-nnzv6" (OuterVolumeSpecName: "kube-api-access-nnzv6") pod "46d05374-2c9c-4e44-ba35-8c9d784fd630" (UID: "46d05374-2c9c-4e44-ba35-8c9d784fd630"). InnerVolumeSpecName "kube-api-access-nnzv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.960914 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" event={"ID":"17d130bb-85ab-4a76-a5cb-09370b6165b7","Type":"ContainerStarted","Data":"62464c3fc48999ffbe1abee787184fb879e303cc591700757f2116ce663c8786"} Jan 27 15:27:32 crc kubenswrapper[4697]: I0127 15:27:32.961643 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" podUID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" containerName="dnsmasq-dns" containerID="cri-o://8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441" gracePeriod=10 Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.003998 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mfkdn" podStartSLOduration=2.003970658 podStartE2EDuration="2.003970658s" podCreationTimestamp="2026-01-27 15:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:27:32.985555491 +0000 UTC m=+1149.157955272" watchObservedRunningTime="2026-01-27 15:27:33.003970658 +0000 UTC m=+1149.176370439" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.013247 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-dns-svc" 
(OuterVolumeSpecName: "dns-svc") pod "46d05374-2c9c-4e44-ba35-8c9d784fd630" (UID: "46d05374-2c9c-4e44-ba35-8c9d784fd630"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.020340 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-config" (OuterVolumeSpecName: "config") pod "46d05374-2c9c-4e44-ba35-8c9d784fd630" (UID: "46d05374-2c9c-4e44-ba35-8c9d784fd630"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.023051 4697 scope.go:117] "RemoveContainer" containerID="c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.036019 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.036046 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnzv6\" (UniqueName: \"kubernetes.io/projected/46d05374-2c9c-4e44-ba35-8c9d784fd630-kube-api-access-nnzv6\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.036057 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d05374-2c9c-4e44-ba35-8c9d784fd630-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.066965 4697 scope.go:117] "RemoveContainer" containerID="f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d" Jan 27 15:27:33 crc kubenswrapper[4697]: E0127 15:27:33.072974 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d\": container with ID starting with f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d not found: ID does not exist" containerID="f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.073206 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d"} err="failed to get container status \"f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d\": rpc error: code = NotFound desc = could not find container \"f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d\": container with ID starting with f8939cf2f841d3d04a50fab3bff6e1aae62d17cae48ff33582fd1f69d1c1679d not found: ID does not exist" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.073231 4697 scope.go:117] "RemoveContainer" containerID="c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d" Jan 27 15:27:33 crc kubenswrapper[4697]: E0127 15:27:33.074328 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d\": container with ID starting with c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d not found: ID does not exist" containerID="c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.074394 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d"} err="failed to get container status \"c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d\": rpc error: code = NotFound desc = could not find container \"c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d\": container with ID 
starting with c1a6c6b86a2b0ac4300c79e22422ef3e018b8eab2a2367e6f0025d979310524d not found: ID does not exist" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.264689 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jhr52"] Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.268308 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jhr52"] Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.293745 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.381270 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.441486 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-dns-svc\") pod \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.441549 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-config\") pod \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.441626 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp72l\" (UniqueName: \"kubernetes.io/projected/a6d1b13a-3022-452e-8343-2a3a6ad0856e-kube-api-access-sp72l\") pod \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\" (UID: \"a6d1b13a-3022-452e-8343-2a3a6ad0856e\") " Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.470203 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a6d1b13a-3022-452e-8343-2a3a6ad0856e-kube-api-access-sp72l" (OuterVolumeSpecName: "kube-api-access-sp72l") pod "a6d1b13a-3022-452e-8343-2a3a6ad0856e" (UID: "a6d1b13a-3022-452e-8343-2a3a6ad0856e"). InnerVolumeSpecName "kube-api-access-sp72l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.502015 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6d1b13a-3022-452e-8343-2a3a6ad0856e" (UID: "a6d1b13a-3022-452e-8343-2a3a6ad0856e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.504665 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-config" (OuterVolumeSpecName: "config") pod "a6d1b13a-3022-452e-8343-2a3a6ad0856e" (UID: "a6d1b13a-3022-452e-8343-2a3a6ad0856e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.543399 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.543431 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b13a-3022-452e-8343-2a3a6ad0856e-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.543443 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp72l\" (UniqueName: \"kubernetes.io/projected/a6d1b13a-3022-452e-8343-2a3a6ad0856e-kube-api-access-sp72l\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.968883 4697 generic.go:334] "Generic (PLEG): container finished" podID="17d130bb-85ab-4a76-a5cb-09370b6165b7" containerID="c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c" exitCode=0 Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.969202 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" event={"ID":"17d130bb-85ab-4a76-a5cb-09370b6165b7","Type":"ContainerDied","Data":"c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c"} Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.971432 4697 generic.go:334] "Generic (PLEG): container finished" podID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" containerID="8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441" exitCode=0 Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.971499 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" event={"ID":"a6d1b13a-3022-452e-8343-2a3a6ad0856e","Type":"ContainerDied","Data":"8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441"} Jan 27 15:27:33 crc 
kubenswrapper[4697]: I0127 15:27:33.971523 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" event={"ID":"a6d1b13a-3022-452e-8343-2a3a6ad0856e","Type":"ContainerDied","Data":"00debdfa4e38028dd025852bdaa4e5eafdb7146cd8d64c6008684df24706577c"} Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.971539 4697 scope.go:117] "RemoveContainer" containerID="8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.971612 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2plzf" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.981891 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" event={"ID":"a20f2903-bfe4-4b89-8333-09e1adbd4605","Type":"ContainerStarted","Data":"14f1cc8cf76191aa5f56f49a2688ca574fb414061fa0875a7c74a376769584bc"} Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.982512 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:33 crc kubenswrapper[4697]: I0127 15:27:33.984572 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"93a06b1b-be54-4517-a12a-83c9a4f91367","Type":"ContainerStarted","Data":"89a285befd3f4a4058cdf2b0bebb7c7f645981736b8725354559d1cd74a6f375"} Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.037576 4697 scope.go:117] "RemoveContainer" containerID="0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.039371 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" podStartSLOduration=3.039354268 podStartE2EDuration="3.039354268s" podCreationTimestamp="2026-01-27 15:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:27:34.019476996 +0000 UTC m=+1150.191876777" watchObservedRunningTime="2026-01-27 15:27:34.039354268 +0000 UTC m=+1150.211754049" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.053759 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2plzf"] Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.060379 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2plzf"] Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.072101 4697 scope.go:117] "RemoveContainer" containerID="8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441" Jan 27 15:27:34 crc kubenswrapper[4697]: E0127 15:27:34.072513 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441\": container with ID starting with 8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441 not found: ID does not exist" containerID="8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.072548 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441"} err="failed to get container status \"8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441\": rpc error: code = NotFound desc = could not find container \"8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441\": container with ID starting with 8ed138a800e5f047de7389994db571c8ad5550d2c2d7c3913e74ea2b493f9441 not found: ID does not exist" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.072567 4697 scope.go:117] "RemoveContainer" containerID="0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5" Jan 27 15:27:34 crc 
kubenswrapper[4697]: E0127 15:27:34.073537 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5\": container with ID starting with 0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5 not found: ID does not exist" containerID="0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.073587 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5"} err="failed to get container status \"0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5\": rpc error: code = NotFound desc = could not find container \"0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5\": container with ID starting with 0f5e82f0e82b0311a0b81486017b9abd08e32b6bc9f7024716c67e18eeb598e5 not found: ID does not exist" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.341168 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.341449 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.382199 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jjnzt"] Jan 27 15:27:34 crc kubenswrapper[4697]: E0127 15:27:34.382600 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" containerName="init" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.382626 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" containerName="init" Jan 27 15:27:34 crc kubenswrapper[4697]: E0127 15:27:34.390885 4697 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d05374-2c9c-4e44-ba35-8c9d784fd630" containerName="init" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.390910 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d05374-2c9c-4e44-ba35-8c9d784fd630" containerName="init" Jan 27 15:27:34 crc kubenswrapper[4697]: E0127 15:27:34.390940 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d05374-2c9c-4e44-ba35-8c9d784fd630" containerName="dnsmasq-dns" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.390949 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d05374-2c9c-4e44-ba35-8c9d784fd630" containerName="dnsmasq-dns" Jan 27 15:27:34 crc kubenswrapper[4697]: E0127 15:27:34.390981 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" containerName="dnsmasq-dns" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.390990 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" containerName="dnsmasq-dns" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.391281 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d05374-2c9c-4e44-ba35-8c9d784fd630" containerName="dnsmasq-dns" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.391314 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" containerName="dnsmasq-dns" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.392045 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.393674 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jjnzt"] Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.394099 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.560535 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlnv\" (UniqueName: \"kubernetes.io/projected/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-kube-api-access-7jlnv\") pod \"root-account-create-update-jjnzt\" (UID: \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\") " pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.560586 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-operator-scripts\") pod \"root-account-create-update-jjnzt\" (UID: \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\") " pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.580808 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d05374-2c9c-4e44-ba35-8c9d784fd630" path="/var/lib/kubelet/pods/46d05374-2c9c-4e44-ba35-8c9d784fd630/volumes" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.582129 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d1b13a-3022-452e-8343-2a3a6ad0856e" path="/var/lib/kubelet/pods/a6d1b13a-3022-452e-8343-2a3a6ad0856e/volumes" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.662054 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jlnv\" (UniqueName: 
\"kubernetes.io/projected/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-kube-api-access-7jlnv\") pod \"root-account-create-update-jjnzt\" (UID: \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\") " pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.662113 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-operator-scripts\") pod \"root-account-create-update-jjnzt\" (UID: \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\") " pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.662974 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-operator-scripts\") pod \"root-account-create-update-jjnzt\" (UID: \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\") " pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.681254 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jlnv\" (UniqueName: \"kubernetes.io/projected/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-kube-api-access-7jlnv\") pod \"root-account-create-update-jjnzt\" (UID: \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\") " pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:34 crc kubenswrapper[4697]: I0127 15:27:34.731736 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:35 crc kubenswrapper[4697]: I0127 15:27:35.015532 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"93a06b1b-be54-4517-a12a-83c9a4f91367","Type":"ContainerStarted","Data":"da63a91d871d62091516c3cb0cbc109640e37ac50848e5ba5fbfda45ab52c5b7"} Jan 27 15:27:35 crc kubenswrapper[4697]: I0127 15:27:35.020627 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" event={"ID":"17d130bb-85ab-4a76-a5cb-09370b6165b7","Type":"ContainerStarted","Data":"8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444"} Jan 27 15:27:35 crc kubenswrapper[4697]: I0127 15:27:35.024051 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:35 crc kubenswrapper[4697]: I0127 15:27:35.054366 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" podStartSLOduration=4.054351413 podStartE2EDuration="4.054351413s" podCreationTimestamp="2026-01-27 15:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:27:35.041725701 +0000 UTC m=+1151.214125492" watchObservedRunningTime="2026-01-27 15:27:35.054351413 +0000 UTC m=+1151.226751194" Jan 27 15:27:35 crc kubenswrapper[4697]: I0127 15:27:35.229336 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jjnzt"] Jan 27 15:27:36 crc kubenswrapper[4697]: I0127 15:27:36.016129 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 15:27:36 crc kubenswrapper[4697]: I0127 15:27:36.040476 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"93a06b1b-be54-4517-a12a-83c9a4f91367","Type":"ContainerStarted","Data":"185a2bd9e41ebbe821939ee859dd949f835a4a5ec0760ac2567189c346c5d251"} Jan 27 15:27:36 crc kubenswrapper[4697]: I0127 15:27:36.040668 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 15:27:36 crc kubenswrapper[4697]: I0127 15:27:36.042840 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jjnzt" event={"ID":"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95","Type":"ContainerStarted","Data":"d45ff3f7ef76614be34b74510b586a34de4d4a412b0b2ebe6939839e20680471"} Jan 27 15:27:36 crc kubenswrapper[4697]: I0127 15:27:36.042888 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jjnzt" event={"ID":"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95","Type":"ContainerStarted","Data":"0205b2ad87f0434b54b5d9f7c23c27921e516373bb08011fe4b597503702c50d"} Jan 27 15:27:36 crc kubenswrapper[4697]: I0127 15:27:36.075632 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.650380944 podStartE2EDuration="4.075611454s" podCreationTimestamp="2026-01-27 15:27:32 +0000 UTC" firstStartedPulling="2026-01-27 15:27:33.292068959 +0000 UTC m=+1149.464468740" lastFinishedPulling="2026-01-27 15:27:34.717299479 +0000 UTC m=+1150.889699250" observedRunningTime="2026-01-27 15:27:36.071694467 +0000 UTC m=+1152.244094258" watchObservedRunningTime="2026-01-27 15:27:36.075611454 +0000 UTC m=+1152.248011235" Jan 27 15:27:36 crc kubenswrapper[4697]: I0127 15:27:36.103506 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-jjnzt" podStartSLOduration=2.103487574 podStartE2EDuration="2.103487574s" podCreationTimestamp="2026-01-27 15:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 15:27:36.098052589 +0000 UTC m=+1152.270452380" watchObservedRunningTime="2026-01-27 15:27:36.103487574 +0000 UTC m=+1152.275887355" Jan 27 15:27:36 crc kubenswrapper[4697]: I0127 15:27:36.684435 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 15:27:36 crc kubenswrapper[4697]: I0127 15:27:36.812918 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 15:27:37 crc kubenswrapper[4697]: I0127 15:27:37.050222 4697 generic.go:334] "Generic (PLEG): container finished" podID="d7b7f1e4-e33b-48e9-a6ba-4122f4113a95" containerID="d45ff3f7ef76614be34b74510b586a34de4d4a412b0b2ebe6939839e20680471" exitCode=0 Jan 27 15:27:37 crc kubenswrapper[4697]: I0127 15:27:37.050982 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jjnzt" event={"ID":"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95","Type":"ContainerDied","Data":"d45ff3f7ef76614be34b74510b586a34de4d4a412b0b2ebe6939839e20680471"} Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.199223 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rpfpm"] Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.199473 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" podUID="a20f2903-bfe4-4b89-8333-09e1adbd4605" containerName="dnsmasq-dns" containerID="cri-o://14f1cc8cf76191aa5f56f49a2688ca574fb414061fa0875a7c74a376769584bc" gracePeriod=10 Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.208977 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.307585 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-nrzk8"] Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 
15:27:38.314602 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.383368 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nrzk8"] Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.445529 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.445903 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-dns-svc\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.446048 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-config\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.446167 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf9q6\" (UniqueName: \"kubernetes.io/projected/621e9d49-138c-485b-a57e-1f3ec16c5875-kube-api-access-mf9q6\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.446341 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.548638 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.548736 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.548807 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-dns-svc\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.548842 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-config\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.548864 4697 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mf9q6\" (UniqueName: \"kubernetes.io/projected/621e9d49-138c-485b-a57e-1f3ec16c5875-kube-api-access-mf9q6\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.550083 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.550396 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-dns-svc\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.550669 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-config\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.551063 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.591776 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf9q6\" (UniqueName: 
\"kubernetes.io/projected/621e9d49-138c-485b-a57e-1f3ec16c5875-kube-api-access-mf9q6\") pod \"dnsmasq-dns-698758b865-nrzk8\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.653769 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.791163 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.959392 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jlnv\" (UniqueName: \"kubernetes.io/projected/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-kube-api-access-7jlnv\") pod \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\" (UID: \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\") " Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.959681 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-operator-scripts\") pod \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\" (UID: \"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95\") " Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.960684 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7b7f1e4-e33b-48e9-a6ba-4122f4113a95" (UID: "d7b7f1e4-e33b-48e9-a6ba-4122f4113a95"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:38 crc kubenswrapper[4697]: I0127 15:27:38.963374 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-kube-api-access-7jlnv" (OuterVolumeSpecName: "kube-api-access-7jlnv") pod "d7b7f1e4-e33b-48e9-a6ba-4122f4113a95" (UID: "d7b7f1e4-e33b-48e9-a6ba-4122f4113a95"). InnerVolumeSpecName "kube-api-access-7jlnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.061464 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.061519 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jlnv\" (UniqueName: \"kubernetes.io/projected/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95-kube-api-access-7jlnv\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.072670 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jjnzt" event={"ID":"d7b7f1e4-e33b-48e9-a6ba-4122f4113a95","Type":"ContainerDied","Data":"0205b2ad87f0434b54b5d9f7c23c27921e516373bb08011fe4b597503702c50d"} Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.072740 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0205b2ad87f0434b54b5d9f7c23c27921e516373bb08011fe4b597503702c50d" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.072852 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jjnzt" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.074751 4697 generic.go:334] "Generic (PLEG): container finished" podID="a20f2903-bfe4-4b89-8333-09e1adbd4605" containerID="14f1cc8cf76191aa5f56f49a2688ca574fb414061fa0875a7c74a376769584bc" exitCode=0 Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.074836 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" event={"ID":"a20f2903-bfe4-4b89-8333-09e1adbd4605","Type":"ContainerDied","Data":"14f1cc8cf76191aa5f56f49a2688ca574fb414061fa0875a7c74a376769584bc"} Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.206056 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nrzk8"] Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.239581 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.366966 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-ovsdbserver-nb\") pod \"a20f2903-bfe4-4b89-8333-09e1adbd4605\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.367073 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-config\") pod \"a20f2903-bfe4-4b89-8333-09e1adbd4605\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.367130 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm7qm\" (UniqueName: \"kubernetes.io/projected/a20f2903-bfe4-4b89-8333-09e1adbd4605-kube-api-access-nm7qm\") pod 
\"a20f2903-bfe4-4b89-8333-09e1adbd4605\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.367226 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-dns-svc\") pod \"a20f2903-bfe4-4b89-8333-09e1adbd4605\" (UID: \"a20f2903-bfe4-4b89-8333-09e1adbd4605\") " Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.372711 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20f2903-bfe4-4b89-8333-09e1adbd4605-kube-api-access-nm7qm" (OuterVolumeSpecName: "kube-api-access-nm7qm") pod "a20f2903-bfe4-4b89-8333-09e1adbd4605" (UID: "a20f2903-bfe4-4b89-8333-09e1adbd4605"). InnerVolumeSpecName "kube-api-access-nm7qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.427535 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a20f2903-bfe4-4b89-8333-09e1adbd4605" (UID: "a20f2903-bfe4-4b89-8333-09e1adbd4605"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.433275 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-config" (OuterVolumeSpecName: "config") pod "a20f2903-bfe4-4b89-8333-09e1adbd4605" (UID: "a20f2903-bfe4-4b89-8333-09e1adbd4605"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.473166 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a20f2903-bfe4-4b89-8333-09e1adbd4605" (UID: "a20f2903-bfe4-4b89-8333-09e1adbd4605"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.473487 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.473506 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm7qm\" (UniqueName: \"kubernetes.io/projected/a20f2903-bfe4-4b89-8333-09e1adbd4605-kube-api-access-nm7qm\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.473516 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.473526 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a20f2903-bfe4-4b89-8333-09e1adbd4605-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.542742 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 15:27:39 crc kubenswrapper[4697]: E0127 15:27:39.543151 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20f2903-bfe4-4b89-8333-09e1adbd4605" containerName="dnsmasq-dns" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.543172 4697 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a20f2903-bfe4-4b89-8333-09e1adbd4605" containerName="dnsmasq-dns" Jan 27 15:27:39 crc kubenswrapper[4697]: E0127 15:27:39.543192 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b7f1e4-e33b-48e9-a6ba-4122f4113a95" containerName="mariadb-account-create-update" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.543200 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b7f1e4-e33b-48e9-a6ba-4122f4113a95" containerName="mariadb-account-create-update" Jan 27 15:27:39 crc kubenswrapper[4697]: E0127 15:27:39.543217 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20f2903-bfe4-4b89-8333-09e1adbd4605" containerName="init" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.543224 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20f2903-bfe4-4b89-8333-09e1adbd4605" containerName="init" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.543393 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b7f1e4-e33b-48e9-a6ba-4122f4113a95" containerName="mariadb-account-create-update" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.543425 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20f2903-bfe4-4b89-8333-09e1adbd4605" containerName="dnsmasq-dns" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.548538 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.550272 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-8kk6t" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.550707 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.551831 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.558904 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.568986 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.675935 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.676040 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c66cac-c142-4579-9d13-053d43983229-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.676066 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c4c66cac-c142-4579-9d13-053d43983229-cache\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 
crc kubenswrapper[4697]: I0127 15:27:39.676979 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklq8\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-kube-api-access-pklq8\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.677223 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c4c66cac-c142-4579-9d13-053d43983229-lock\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.677303 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.704870 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-xj4x5"] Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.710953 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.712406 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.716639 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.716638 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.746908 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-xj4x5"] Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.763891 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-xj4x5"] Jan 27 15:27:39 crc kubenswrapper[4697]: E0127 15:27:39.764426 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-rm49l ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-xj4x5" podUID="26fd6c5e-f715-4c37-bde3-0f038b84c8f4" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.778322 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-w2t78"] Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.779171 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-combined-ca-bundle\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.779299 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-scripts\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.779403 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c66cac-c142-4579-9d13-053d43983229-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.779488 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-dispersionconf\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.779406 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.779572 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c4c66cac-c142-4579-9d13-053d43983229-cache\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.779812 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklq8\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-kube-api-access-pklq8\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.779935 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-etc-swift\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.780001 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c4c66cac-c142-4579-9d13-053d43983229-lock\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.780063 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-ring-data-devices\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.780115 
4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.780226 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm49l\" (UniqueName: \"kubernetes.io/projected/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-kube-api-access-rm49l\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.780298 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.780345 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-swiftconf\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.780496 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c4c66cac-c142-4579-9d13-053d43983229-cache\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.780576 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/c4c66cac-c142-4579-9d13-053d43983229-lock\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: E0127 15:27:39.780556 4697 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 15:27:39 crc kubenswrapper[4697]: E0127 15:27:39.780624 4697 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 15:27:39 crc kubenswrapper[4697]: E0127 15:27:39.780674 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift podName:c4c66cac-c142-4579-9d13-053d43983229 nodeName:}" failed. No retries permitted until 2026-01-27 15:27:40.280652241 +0000 UTC m=+1156.453052022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift") pod "swift-storage-0" (UID: "c4c66cac-c142-4579-9d13-053d43983229") : configmap "swift-ring-files" not found Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.780886 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.784557 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w2t78"] Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.787962 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c66cac-c142-4579-9d13-053d43983229-combined-ca-bundle\") pod 
\"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.799705 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklq8\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-kube-api-access-pklq8\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.810556 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.881565 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-swiftconf\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.881820 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-swiftconf\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.881841 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-combined-ca-bundle\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: 
I0127 15:27:39.881864 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-scripts\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.881887 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-scripts\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.881912 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-dispersionconf\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.881941 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-combined-ca-bundle\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.881960 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sw69\" (UniqueName: \"kubernetes.io/projected/afa66008-cd63-46fa-8ac6-622e2b465eec-kube-api-access-8sw69\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.881982 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-etc-swift\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.882002 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-dispersionconf\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.882028 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-ring-data-devices\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.882059 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-ring-data-devices\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.882693 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/afa66008-cd63-46fa-8ac6-622e2b465eec-etc-swift\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.882724 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rm49l\" (UniqueName: \"kubernetes.io/projected/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-kube-api-access-rm49l\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.883300 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-etc-swift\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.884109 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-scripts\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.884714 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-ring-data-devices\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.888304 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-combined-ca-bundle\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.889994 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-dispersionconf\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.894335 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-swiftconf\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.899612 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm49l\" (UniqueName: \"kubernetes.io/projected/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-kube-api-access-rm49l\") pod \"swift-ring-rebalance-xj4x5\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.984243 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-swiftconf\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.984534 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-scripts\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.984650 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-combined-ca-bundle\") pod 
\"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.984740 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sw69\" (UniqueName: \"kubernetes.io/projected/afa66008-cd63-46fa-8ac6-622e2b465eec-kube-api-access-8sw69\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.984867 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-dispersionconf\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.985330 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-ring-data-devices\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.985497 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/afa66008-cd63-46fa-8ac6-622e2b465eec-etc-swift\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.985808 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/afa66008-cd63-46fa-8ac6-622e2b465eec-etc-swift\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " 
pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.985895 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-ring-data-devices\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.986134 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-scripts\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.987871 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-combined-ca-bundle\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.988332 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-dispersionconf\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:39 crc kubenswrapper[4697]: I0127 15:27:39.989332 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-swiftconf\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.000482 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sw69\" (UniqueName: \"kubernetes.io/projected/afa66008-cd63-46fa-8ac6-622e2b465eec-kube-api-access-8sw69\") pod \"swift-ring-rebalance-w2t78\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.087213 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" event={"ID":"a20f2903-bfe4-4b89-8333-09e1adbd4605","Type":"ContainerDied","Data":"49c28a7f86c20d9c166a2458901d474d480bf30a7e15885cc46e6c5337ecfd5d"} Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.087271 4697 scope.go:117] "RemoveContainer" containerID="14f1cc8cf76191aa5f56f49a2688ca574fb414061fa0875a7c74a376769584bc" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.087418 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rpfpm" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.090650 4697 generic.go:334] "Generic (PLEG): container finished" podID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerID="567854e2f52d0ef7316cd93a2cf47b1045e956ea21cec65266ab23aaaefe8a96" exitCode=0 Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.090827 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.091593 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nrzk8" event={"ID":"621e9d49-138c-485b-a57e-1f3ec16c5875","Type":"ContainerDied","Data":"567854e2f52d0ef7316cd93a2cf47b1045e956ea21cec65266ab23aaaefe8a96"} Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.091626 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nrzk8" event={"ID":"621e9d49-138c-485b-a57e-1f3ec16c5875","Type":"ContainerStarted","Data":"3c64131b550b4f10c0ae08c4ef24f9a5b447d526f4ed8f27556ecd96eb936e0d"} Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.115712 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.118055 4697 scope.go:117] "RemoveContainer" containerID="1d590c93adf75103f50007e1c4095f12338e543b14efd3d6e8821c3aed97c7a2" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.145487 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rpfpm"] Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.157434 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rpfpm"] Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.174544 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.190169 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm49l\" (UniqueName: \"kubernetes.io/projected/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-kube-api-access-rm49l\") pod \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.190270 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-swiftconf\") pod \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.190296 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-dispersionconf\") pod \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.190370 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-scripts\") pod \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.190454 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-combined-ca-bundle\") pod \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.190523 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-ring-data-devices\") pod \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.190549 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-etc-swift\") pod \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\" (UID: \"26fd6c5e-f715-4c37-bde3-0f038b84c8f4\") " Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.192019 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "26fd6c5e-f715-4c37-bde3-0f038b84c8f4" (UID: "26fd6c5e-f715-4c37-bde3-0f038b84c8f4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.193379 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-scripts" (OuterVolumeSpecName: "scripts") pod "26fd6c5e-f715-4c37-bde3-0f038b84c8f4" (UID: "26fd6c5e-f715-4c37-bde3-0f038b84c8f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.194404 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "26fd6c5e-f715-4c37-bde3-0f038b84c8f4" (UID: "26fd6c5e-f715-4c37-bde3-0f038b84c8f4"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.194749 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "26fd6c5e-f715-4c37-bde3-0f038b84c8f4" (UID: "26fd6c5e-f715-4c37-bde3-0f038b84c8f4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.195511 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-kube-api-access-rm49l" (OuterVolumeSpecName: "kube-api-access-rm49l") pod "26fd6c5e-f715-4c37-bde3-0f038b84c8f4" (UID: "26fd6c5e-f715-4c37-bde3-0f038b84c8f4"). InnerVolumeSpecName "kube-api-access-rm49l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.198105 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "26fd6c5e-f715-4c37-bde3-0f038b84c8f4" (UID: "26fd6c5e-f715-4c37-bde3-0f038b84c8f4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.215985 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26fd6c5e-f715-4c37-bde3-0f038b84c8f4" (UID: "26fd6c5e-f715-4c37-bde3-0f038b84c8f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.299598 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.300324 4697 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.300351 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm49l\" (UniqueName: \"kubernetes.io/projected/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-kube-api-access-rm49l\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.300366 4697 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.300378 4697 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.300388 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.300399 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 
15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.300410 4697 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26fd6c5e-f715-4c37-bde3-0f038b84c8f4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:40 crc kubenswrapper[4697]: E0127 15:27:40.300541 4697 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 15:27:40 crc kubenswrapper[4697]: E0127 15:27:40.300557 4697 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 15:27:40 crc kubenswrapper[4697]: E0127 15:27:40.300608 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift podName:c4c66cac-c142-4579-9d13-053d43983229 nodeName:}" failed. No retries permitted until 2026-01-27 15:27:41.300589241 +0000 UTC m=+1157.472989022 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift") pod "swift-storage-0" (UID: "c4c66cac-c142-4579-9d13-053d43983229") : configmap "swift-ring-files" not found Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.577999 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20f2903-bfe4-4b89-8333-09e1adbd4605" path="/var/lib/kubelet/pods/a20f2903-bfe4-4b89-8333-09e1adbd4605/volumes" Jan 27 15:27:40 crc kubenswrapper[4697]: I0127 15:27:40.669403 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w2t78"] Jan 27 15:27:41 crc kubenswrapper[4697]: I0127 15:27:41.123553 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2t78" event={"ID":"afa66008-cd63-46fa-8ac6-622e2b465eec","Type":"ContainerStarted","Data":"0ee3ccbb133bd2b876146df7e502d8d76c8769055031a9670bc56e8a5af53112"} Jan 27 15:27:41 crc kubenswrapper[4697]: I0127 15:27:41.126044 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xj4x5" Jan 27 15:27:41 crc kubenswrapper[4697]: I0127 15:27:41.126040 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nrzk8" event={"ID":"621e9d49-138c-485b-a57e-1f3ec16c5875","Type":"ContainerStarted","Data":"ecbb4d6d233ecc2067421709c3613025b36534385ec0d4620144bbb9d9533977"} Jan 27 15:27:41 crc kubenswrapper[4697]: I0127 15:27:41.126451 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:41 crc kubenswrapper[4697]: I0127 15:27:41.154906 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-nrzk8" podStartSLOduration=3.154884098 podStartE2EDuration="3.154884098s" podCreationTimestamp="2026-01-27 15:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:27:41.150482089 +0000 UTC m=+1157.322881870" watchObservedRunningTime="2026-01-27 15:27:41.154884098 +0000 UTC m=+1157.327283879" Jan 27 15:27:41 crc kubenswrapper[4697]: I0127 15:27:41.188872 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-xj4x5"] Jan 27 15:27:41 crc kubenswrapper[4697]: I0127 15:27:41.196195 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-xj4x5"] Jan 27 15:27:41 crc kubenswrapper[4697]: I0127 15:27:41.320278 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:41 crc kubenswrapper[4697]: E0127 15:27:41.320466 4697 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 15:27:41 
crc kubenswrapper[4697]: E0127 15:27:41.320501 4697 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 15:27:41 crc kubenswrapper[4697]: E0127 15:27:41.320562 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift podName:c4c66cac-c142-4579-9d13-053d43983229 nodeName:}" failed. No retries permitted until 2026-01-27 15:27:43.320543659 +0000 UTC m=+1159.492943440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift") pod "swift-storage-0" (UID: "c4c66cac-c142-4579-9d13-053d43983229") : configmap "swift-ring-files" not found Jan 27 15:27:42 crc kubenswrapper[4697]: I0127 15:27:42.004961 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:42 crc kubenswrapper[4697]: I0127 15:27:42.583350 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fd6c5e-f715-4c37-bde3-0f038b84c8f4" path="/var/lib/kubelet/pods/26fd6c5e-f715-4c37-bde3-0f038b84c8f4/volumes" Jan 27 15:27:42 crc kubenswrapper[4697]: I0127 15:27:42.998087 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jjnzt"] Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.001113 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jjnzt"] Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.070545 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5lnm2"] Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.077130 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.082901 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5lnm2"] Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.083479 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.255003 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/470d5b7e-11d4-44c4-ac71-698390cec599-operator-scripts\") pod \"root-account-create-update-5lnm2\" (UID: \"470d5b7e-11d4-44c4-ac71-698390cec599\") " pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.255055 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6wzh\" (UniqueName: \"kubernetes.io/projected/470d5b7e-11d4-44c4-ac71-698390cec599-kube-api-access-j6wzh\") pod \"root-account-create-update-5lnm2\" (UID: \"470d5b7e-11d4-44c4-ac71-698390cec599\") " pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.356692 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.356872 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/470d5b7e-11d4-44c4-ac71-698390cec599-operator-scripts\") pod \"root-account-create-update-5lnm2\" (UID: \"470d5b7e-11d4-44c4-ac71-698390cec599\") " 
pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.356905 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6wzh\" (UniqueName: \"kubernetes.io/projected/470d5b7e-11d4-44c4-ac71-698390cec599-kube-api-access-j6wzh\") pod \"root-account-create-update-5lnm2\" (UID: \"470d5b7e-11d4-44c4-ac71-698390cec599\") " pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:43 crc kubenswrapper[4697]: E0127 15:27:43.357305 4697 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 15:27:43 crc kubenswrapper[4697]: E0127 15:27:43.357322 4697 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 15:27:43 crc kubenswrapper[4697]: E0127 15:27:43.357356 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift podName:c4c66cac-c142-4579-9d13-053d43983229 nodeName:}" failed. No retries permitted until 2026-01-27 15:27:47.357343419 +0000 UTC m=+1163.529743190 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift") pod "swift-storage-0" (UID: "c4c66cac-c142-4579-9d13-053d43983229") : configmap "swift-ring-files" not found Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.358263 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/470d5b7e-11d4-44c4-ac71-698390cec599-operator-scripts\") pod \"root-account-create-update-5lnm2\" (UID: \"470d5b7e-11d4-44c4-ac71-698390cec599\") " pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.393563 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6wzh\" (UniqueName: \"kubernetes.io/projected/470d5b7e-11d4-44c4-ac71-698390cec599-kube-api-access-j6wzh\") pod \"root-account-create-update-5lnm2\" (UID: \"470d5b7e-11d4-44c4-ac71-698390cec599\") " pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:43 crc kubenswrapper[4697]: I0127 15:27:43.693508 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:44 crc kubenswrapper[4697]: I0127 15:27:44.582586 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b7f1e4-e33b-48e9-a6ba-4122f4113a95" path="/var/lib/kubelet/pods/d7b7f1e4-e33b-48e9-a6ba-4122f4113a95/volumes" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.125516 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5lnm2"] Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.168796 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2t78" event={"ID":"afa66008-cd63-46fa-8ac6-622e2b465eec","Type":"ContainerStarted","Data":"3616ddcb175a2d176f8fb77d00789c8cb0ed50ff85d4daad3dfe11682e208a3a"} Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.169635 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5lnm2" event={"ID":"470d5b7e-11d4-44c4-ac71-698390cec599","Type":"ContainerStarted","Data":"ce5ffdb23e5b2c70ba70288906596c3ce2558e59ed146b94433b776141341a28"} Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.194069 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-w2t78" podStartSLOduration=2.162674391 podStartE2EDuration="6.194052456s" podCreationTimestamp="2026-01-27 15:27:39 +0000 UTC" firstStartedPulling="2026-01-27 15:27:40.674195689 +0000 UTC m=+1156.846595470" lastFinishedPulling="2026-01-27 15:27:44.705573754 +0000 UTC m=+1160.877973535" observedRunningTime="2026-01-27 15:27:45.193654526 +0000 UTC m=+1161.366054307" watchObservedRunningTime="2026-01-27 15:27:45.194052456 +0000 UTC m=+1161.366452247" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.558718 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dk6vd"] Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.560309 4697 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.567707 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dk6vd"] Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.679624 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ca1e-account-create-update-qktst"] Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.681118 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.683135 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.695676 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ca1e-account-create-update-qktst"] Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.733038 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6l5f\" (UniqueName: \"kubernetes.io/projected/b261bbe5-03e9-4ebe-a8d0-a375b87722df-kube-api-access-g6l5f\") pod \"keystone-db-create-dk6vd\" (UID: \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\") " pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.733244 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b261bbe5-03e9-4ebe-a8d0-a375b87722df-operator-scripts\") pod \"keystone-db-create-dk6vd\" (UID: \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\") " pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.834696 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b261bbe5-03e9-4ebe-a8d0-a375b87722df-operator-scripts\") pod \"keystone-db-create-dk6vd\" (UID: \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\") " pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.834764 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7wlw\" (UniqueName: \"kubernetes.io/projected/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-kube-api-access-j7wlw\") pod \"keystone-ca1e-account-create-update-qktst\" (UID: \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\") " pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.834822 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6l5f\" (UniqueName: \"kubernetes.io/projected/b261bbe5-03e9-4ebe-a8d0-a375b87722df-kube-api-access-g6l5f\") pod \"keystone-db-create-dk6vd\" (UID: \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\") " pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.834954 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-operator-scripts\") pod \"keystone-ca1e-account-create-update-qktst\" (UID: \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\") " pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.836004 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b261bbe5-03e9-4ebe-a8d0-a375b87722df-operator-scripts\") pod \"keystone-db-create-dk6vd\" (UID: \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\") " pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.852115 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-g6l5f\" (UniqueName: \"kubernetes.io/projected/b261bbe5-03e9-4ebe-a8d0-a375b87722df-kube-api-access-g6l5f\") pod \"keystone-db-create-dk6vd\" (UID: \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\") " pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.884330 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.924971 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hflbt"] Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.926288 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hflbt" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.939995 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-operator-scripts\") pod \"keystone-ca1e-account-create-update-qktst\" (UID: \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\") " pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.940276 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7wlw\" (UniqueName: \"kubernetes.io/projected/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-kube-api-access-j7wlw\") pod \"keystone-ca1e-account-create-update-qktst\" (UID: \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\") " pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.941610 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-operator-scripts\") pod \"keystone-ca1e-account-create-update-qktst\" (UID: \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\") " 
pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.942216 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hflbt"] Jan 27 15:27:45 crc kubenswrapper[4697]: I0127 15:27:45.965935 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7wlw\" (UniqueName: \"kubernetes.io/projected/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-kube-api-access-j7wlw\") pod \"keystone-ca1e-account-create-update-qktst\" (UID: \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\") " pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.014860 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-958c-account-create-update-8gl6b"] Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.015948 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.018179 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.034914 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-958c-account-create-update-8gl6b"] Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.050622 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.051348 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89p9\" (UniqueName: \"kubernetes.io/projected/f8f1c9d2-cb07-4cd8-8614-30734cff2994-kube-api-access-s89p9\") pod \"placement-db-create-hflbt\" (UID: \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\") " pod="openstack/placement-db-create-hflbt" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.051451 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f1c9d2-cb07-4cd8-8614-30734cff2994-operator-scripts\") pod \"placement-db-create-hflbt\" (UID: \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\") " pod="openstack/placement-db-create-hflbt" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.155530 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7wj\" (UniqueName: \"kubernetes.io/projected/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-kube-api-access-hj7wj\") pod \"placement-958c-account-create-update-8gl6b\" (UID: \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\") " pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.155610 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f1c9d2-cb07-4cd8-8614-30734cff2994-operator-scripts\") pod \"placement-db-create-hflbt\" (UID: \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\") " pod="openstack/placement-db-create-hflbt" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.155692 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s89p9\" (UniqueName: 
\"kubernetes.io/projected/f8f1c9d2-cb07-4cd8-8614-30734cff2994-kube-api-access-s89p9\") pod \"placement-db-create-hflbt\" (UID: \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\") " pod="openstack/placement-db-create-hflbt" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.155719 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-operator-scripts\") pod \"placement-958c-account-create-update-8gl6b\" (UID: \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\") " pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.157373 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f1c9d2-cb07-4cd8-8614-30734cff2994-operator-scripts\") pod \"placement-db-create-hflbt\" (UID: \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\") " pod="openstack/placement-db-create-hflbt" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.177477 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89p9\" (UniqueName: \"kubernetes.io/projected/f8f1c9d2-cb07-4cd8-8614-30734cff2994-kube-api-access-s89p9\") pod \"placement-db-create-hflbt\" (UID: \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\") " pod="openstack/placement-db-create-hflbt" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.191650 4697 generic.go:334] "Generic (PLEG): container finished" podID="470d5b7e-11d4-44c4-ac71-698390cec599" containerID="b37152d1cfebc0bfb73c67e235ff7c5ea59c99acd5875d5a03d755ff23ac9fe2" exitCode=0 Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.191771 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5lnm2" event={"ID":"470d5b7e-11d4-44c4-ac71-698390cec599","Type":"ContainerDied","Data":"b37152d1cfebc0bfb73c67e235ff7c5ea59c99acd5875d5a03d755ff23ac9fe2"} Jan 
27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.256577 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7wj\" (UniqueName: \"kubernetes.io/projected/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-kube-api-access-hj7wj\") pod \"placement-958c-account-create-update-8gl6b\" (UID: \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\") " pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.256925 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-operator-scripts\") pod \"placement-958c-account-create-update-8gl6b\" (UID: \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\") " pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.257726 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-operator-scripts\") pod \"placement-958c-account-create-update-8gl6b\" (UID: \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\") " pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.286107 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7wj\" (UniqueName: \"kubernetes.io/projected/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-kube-api-access-hj7wj\") pod \"placement-958c-account-create-update-8gl6b\" (UID: \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\") " pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.298249 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-scmbs"] Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.299671 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-scmbs" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.305949 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-scmbs"] Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.329455 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hflbt" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.338474 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.396184 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4e77-account-create-update-x64b2"] Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.397520 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.401149 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.445638 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4e77-account-create-update-x64b2"] Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.456370 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dk6vd"] Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.461645 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpp4x\" (UniqueName: \"kubernetes.io/projected/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-kube-api-access-rpp4x\") pod \"glance-db-create-scmbs\" (UID: \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\") " pod="openstack/glance-db-create-scmbs" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.461806 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-operator-scripts\") pod \"glance-db-create-scmbs\" (UID: \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\") " pod="openstack/glance-db-create-scmbs" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.532998 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ca1e-account-create-update-qktst"] Jan 27 15:27:46 crc kubenswrapper[4697]: W0127 15:27:46.561241 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0bb870a_beb1_45cc_a1b2_30f6692a4cb6.slice/crio-fbabbd2d7ae9f4d153ccc2ba81720bca7c88cdad6bb64bd980fd1257341ba3c9 WatchSource:0}: Error finding container fbabbd2d7ae9f4d153ccc2ba81720bca7c88cdad6bb64bd980fd1257341ba3c9: Status 404 returned error can't find the container with id fbabbd2d7ae9f4d153ccc2ba81720bca7c88cdad6bb64bd980fd1257341ba3c9 Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.563301 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ndg\" (UniqueName: \"kubernetes.io/projected/59c870c6-b268-440d-a6a4-d1ea57382a67-kube-api-access-k7ndg\") pod \"glance-4e77-account-create-update-x64b2\" (UID: \"59c870c6-b268-440d-a6a4-d1ea57382a67\") " pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.563391 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-operator-scripts\") pod \"glance-db-create-scmbs\" (UID: \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\") " pod="openstack/glance-db-create-scmbs" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.563470 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rpp4x\" (UniqueName: \"kubernetes.io/projected/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-kube-api-access-rpp4x\") pod \"glance-db-create-scmbs\" (UID: \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\") " pod="openstack/glance-db-create-scmbs" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.563561 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c870c6-b268-440d-a6a4-d1ea57382a67-operator-scripts\") pod \"glance-4e77-account-create-update-x64b2\" (UID: \"59c870c6-b268-440d-a6a4-d1ea57382a67\") " pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.564613 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-operator-scripts\") pod \"glance-db-create-scmbs\" (UID: \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\") " pod="openstack/glance-db-create-scmbs" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.585215 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpp4x\" (UniqueName: \"kubernetes.io/projected/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-kube-api-access-rpp4x\") pod \"glance-db-create-scmbs\" (UID: \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\") " pod="openstack/glance-db-create-scmbs" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.620998 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-scmbs" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.665693 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ndg\" (UniqueName: \"kubernetes.io/projected/59c870c6-b268-440d-a6a4-d1ea57382a67-kube-api-access-k7ndg\") pod \"glance-4e77-account-create-update-x64b2\" (UID: \"59c870c6-b268-440d-a6a4-d1ea57382a67\") " pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.665888 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c870c6-b268-440d-a6a4-d1ea57382a67-operator-scripts\") pod \"glance-4e77-account-create-update-x64b2\" (UID: \"59c870c6-b268-440d-a6a4-d1ea57382a67\") " pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.666592 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c870c6-b268-440d-a6a4-d1ea57382a67-operator-scripts\") pod \"glance-4e77-account-create-update-x64b2\" (UID: \"59c870c6-b268-440d-a6a4-d1ea57382a67\") " pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.692965 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ndg\" (UniqueName: \"kubernetes.io/projected/59c870c6-b268-440d-a6a4-d1ea57382a67-kube-api-access-k7ndg\") pod \"glance-4e77-account-create-update-x64b2\" (UID: \"59c870c6-b268-440d-a6a4-d1ea57382a67\") " pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.727158 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.751153 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-958c-account-create-update-8gl6b"] Jan 27 15:27:46 crc kubenswrapper[4697]: W0127 15:27:46.790072 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e67f4cf_29a8_46ce_8ea6_757b703f82b1.slice/crio-c5f707e7620695778fdace9bc7ddf1de3ca33103908ab6c3e2be606ac57b159d WatchSource:0}: Error finding container c5f707e7620695778fdace9bc7ddf1de3ca33103908ab6c3e2be606ac57b159d: Status 404 returned error can't find the container with id c5f707e7620695778fdace9bc7ddf1de3ca33103908ab6c3e2be606ac57b159d Jan 27 15:27:46 crc kubenswrapper[4697]: I0127 15:27:46.855768 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hflbt"] Jan 27 15:27:46 crc kubenswrapper[4697]: W0127 15:27:46.912873 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8f1c9d2_cb07_4cd8_8614_30734cff2994.slice/crio-7f9a136428db34a297f3d4df8af0f1e5a8cf1e982f6ca69c6982885cd5e26e9f WatchSource:0}: Error finding container 7f9a136428db34a297f3d4df8af0f1e5a8cf1e982f6ca69c6982885cd5e26e9f: Status 404 returned error can't find the container with id 7f9a136428db34a297f3d4df8af0f1e5a8cf1e982f6ca69c6982885cd5e26e9f Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.163586 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-scmbs"] Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.200544 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hflbt" event={"ID":"f8f1c9d2-cb07-4cd8-8614-30734cff2994","Type":"ContainerStarted","Data":"c7a8042ba9f4bda68f0c264b663451c54f6fa0f486e1d7ef66e911f74e1f0a2d"} Jan 27 15:27:47 crc 
kubenswrapper[4697]: I0127 15:27:47.200850 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hflbt" event={"ID":"f8f1c9d2-cb07-4cd8-8614-30734cff2994","Type":"ContainerStarted","Data":"7f9a136428db34a297f3d4df8af0f1e5a8cf1e982f6ca69c6982885cd5e26e9f"} Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.202180 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-958c-account-create-update-8gl6b" event={"ID":"5e67f4cf-29a8-46ce-8ea6-757b703f82b1","Type":"ContainerStarted","Data":"a0bce1eef5ce6123d85beeabcc7eb474c6257112de00b49a05aab36470706ee8"} Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.202288 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-958c-account-create-update-8gl6b" event={"ID":"5e67f4cf-29a8-46ce-8ea6-757b703f82b1","Type":"ContainerStarted","Data":"c5f707e7620695778fdace9bc7ddf1de3ca33103908ab6c3e2be606ac57b159d"} Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.205160 4697 generic.go:334] "Generic (PLEG): container finished" podID="b261bbe5-03e9-4ebe-a8d0-a375b87722df" containerID="b1db229931a4ea3764a6e347740499b8491aa2d4b4fe54bfce8e0cdb98689a27" exitCode=0 Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.205295 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dk6vd" event={"ID":"b261bbe5-03e9-4ebe-a8d0-a375b87722df","Type":"ContainerDied","Data":"b1db229931a4ea3764a6e347740499b8491aa2d4b4fe54bfce8e0cdb98689a27"} Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.205378 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dk6vd" event={"ID":"b261bbe5-03e9-4ebe-a8d0-a375b87722df","Type":"ContainerStarted","Data":"2aa0c6fc8e0e189cfab7f04aa2bb9b1dda42ba899ecd0dc6a740a28f578c6f82"} Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.209305 4697 generic.go:334] "Generic (PLEG): container finished" podID="e0bb870a-beb1-45cc-a1b2-30f6692a4cb6" 
containerID="62fd4c4b0a1857647cf9d9df3b31686fd3cb5d2d66c9bd518cb4bc39e28c6460" exitCode=0 Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.209503 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ca1e-account-create-update-qktst" event={"ID":"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6","Type":"ContainerDied","Data":"62fd4c4b0a1857647cf9d9df3b31686fd3cb5d2d66c9bd518cb4bc39e28c6460"} Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.209573 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ca1e-account-create-update-qktst" event={"ID":"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6","Type":"ContainerStarted","Data":"fbabbd2d7ae9f4d153ccc2ba81720bca7c88cdad6bb64bd980fd1257341ba3c9"} Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.210849 4697 generic.go:334] "Generic (PLEG): container finished" podID="abff1f2e-e0f3-4730-888c-2e2d8464f624" containerID="e08609f1f8f84cd80d5406d5f5667af27d378fc1d71f72faca38aca992d8af76" exitCode=0 Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.210969 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abff1f2e-e0f3-4730-888c-2e2d8464f624","Type":"ContainerDied","Data":"e08609f1f8f84cd80d5406d5f5667af27d378fc1d71f72faca38aca992d8af76"} Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.250497 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-hflbt" podStartSLOduration=2.250482942 podStartE2EDuration="2.250482942s" podCreationTimestamp="2026-01-27 15:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:27:47.22216524 +0000 UTC m=+1163.394565021" watchObservedRunningTime="2026-01-27 15:27:47.250482942 +0000 UTC m=+1163.422882723" Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.282699 4697 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/placement-958c-account-create-update-8gl6b" podStartSLOduration=2.282677249 podStartE2EDuration="2.282677249s" podCreationTimestamp="2026-01-27 15:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:27:47.279066589 +0000 UTC m=+1163.451466370" watchObservedRunningTime="2026-01-27 15:27:47.282677249 +0000 UTC m=+1163.455077030" Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.377434 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4e77-account-create-update-x64b2"] Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.395926 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:47 crc kubenswrapper[4697]: E0127 15:27:47.396356 4697 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 15:27:47 crc kubenswrapper[4697]: E0127 15:27:47.396368 4697 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 15:27:47 crc kubenswrapper[4697]: E0127 15:27:47.396402 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift podName:c4c66cac-c142-4579-9d13-053d43983229 nodeName:}" failed. No retries permitted until 2026-01-27 15:27:55.396388993 +0000 UTC m=+1171.568788774 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift") pod "swift-storage-0" (UID: "c4c66cac-c142-4579-9d13-053d43983229") : configmap "swift-ring-files" not found Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.607040 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.701757 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6wzh\" (UniqueName: \"kubernetes.io/projected/470d5b7e-11d4-44c4-ac71-698390cec599-kube-api-access-j6wzh\") pod \"470d5b7e-11d4-44c4-ac71-698390cec599\" (UID: \"470d5b7e-11d4-44c4-ac71-698390cec599\") " Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.701968 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/470d5b7e-11d4-44c4-ac71-698390cec599-operator-scripts\") pod \"470d5b7e-11d4-44c4-ac71-698390cec599\" (UID: \"470d5b7e-11d4-44c4-ac71-698390cec599\") " Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.706834 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470d5b7e-11d4-44c4-ac71-698390cec599-kube-api-access-j6wzh" (OuterVolumeSpecName: "kube-api-access-j6wzh") pod "470d5b7e-11d4-44c4-ac71-698390cec599" (UID: "470d5b7e-11d4-44c4-ac71-698390cec599"). InnerVolumeSpecName "kube-api-access-j6wzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.708096 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470d5b7e-11d4-44c4-ac71-698390cec599-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "470d5b7e-11d4-44c4-ac71-698390cec599" (UID: "470d5b7e-11d4-44c4-ac71-698390cec599"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.804032 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/470d5b7e-11d4-44c4-ac71-698390cec599-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:47 crc kubenswrapper[4697]: I0127 15:27:47.804519 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6wzh\" (UniqueName: \"kubernetes.io/projected/470d5b7e-11d4-44c4-ac71-698390cec599-kube-api-access-j6wzh\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.222911 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5lnm2" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.222916 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5lnm2" event={"ID":"470d5b7e-11d4-44c4-ac71-698390cec599","Type":"ContainerDied","Data":"ce5ffdb23e5b2c70ba70288906596c3ce2558e59ed146b94433b776141341a28"} Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.223143 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce5ffdb23e5b2c70ba70288906596c3ce2558e59ed146b94433b776141341a28" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.224084 4697 generic.go:334] "Generic (PLEG): container finished" podID="5e67f4cf-29a8-46ce-8ea6-757b703f82b1" containerID="a0bce1eef5ce6123d85beeabcc7eb474c6257112de00b49a05aab36470706ee8" exitCode=0 Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.224143 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-958c-account-create-update-8gl6b" event={"ID":"5e67f4cf-29a8-46ce-8ea6-757b703f82b1","Type":"ContainerDied","Data":"a0bce1eef5ce6123d85beeabcc7eb474c6257112de00b49a05aab36470706ee8"} Jan 27 15:27:48 crc 
kubenswrapper[4697]: I0127 15:27:48.229059 4697 generic.go:334] "Generic (PLEG): container finished" podID="35eda2ea-2c5d-446d-9065-cd7a9d12cd1e" containerID="fbec09964c961e55efcb6ef5783c84a88f1c95e1a6aa9cc800f141dd8e5062db" exitCode=0 Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.229174 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-scmbs" event={"ID":"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e","Type":"ContainerDied","Data":"fbec09964c961e55efcb6ef5783c84a88f1c95e1a6aa9cc800f141dd8e5062db"} Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.229213 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-scmbs" event={"ID":"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e","Type":"ContainerStarted","Data":"12da7037ffc21b32ca053755d0c70609cb3f95b770f00ac84b947a9fdd8db2a7"} Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.231676 4697 generic.go:334] "Generic (PLEG): container finished" podID="59c870c6-b268-440d-a6a4-d1ea57382a67" containerID="22911e52044c925463a4f21a5c009c3a79669418899ee8ae8827d98621b6b6b5" exitCode=0 Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.231757 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4e77-account-create-update-x64b2" event={"ID":"59c870c6-b268-440d-a6a4-d1ea57382a67","Type":"ContainerDied","Data":"22911e52044c925463a4f21a5c009c3a79669418899ee8ae8827d98621b6b6b5"} Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.231821 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4e77-account-create-update-x64b2" event={"ID":"59c870c6-b268-440d-a6a4-d1ea57382a67","Type":"ContainerStarted","Data":"3a158259a43a16b248cc034524a2a6a034c81e9ff83c480a3fba055859daed1a"} Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.235662 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"abff1f2e-e0f3-4730-888c-2e2d8464f624","Type":"ContainerStarted","Data":"489c8c931a994429732b5b400a22535c3856ed191c25e0569f38c1a130722991"} Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.236846 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.239069 4697 generic.go:334] "Generic (PLEG): container finished" podID="f8f1c9d2-cb07-4cd8-8614-30734cff2994" containerID="c7a8042ba9f4bda68f0c264b663451c54f6fa0f486e1d7ef66e911f74e1f0a2d" exitCode=0 Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.239144 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hflbt" event={"ID":"f8f1c9d2-cb07-4cd8-8614-30734cff2994","Type":"ContainerDied","Data":"c7a8042ba9f4bda68f0c264b663451c54f6fa0f486e1d7ef66e911f74e1f0a2d"} Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.273226 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.979348566 podStartE2EDuration="1m7.273212048s" podCreationTimestamp="2026-01-27 15:26:41 +0000 UTC" firstStartedPulling="2026-01-27 15:26:43.404033616 +0000 UTC m=+1099.576433397" lastFinishedPulling="2026-01-27 15:27:13.697897098 +0000 UTC m=+1129.870296879" observedRunningTime="2026-01-27 15:27:48.272895291 +0000 UTC m=+1164.445295072" watchObservedRunningTime="2026-01-27 15:27:48.273212048 +0000 UTC m=+1164.445611829" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.660992 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.668813 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.738382 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5fwsh"] Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.738607 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" podUID="17d130bb-85ab-4a76-a5cb-09370b6165b7" containerName="dnsmasq-dns" containerID="cri-o://8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444" gracePeriod=10 Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.824886 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b261bbe5-03e9-4ebe-a8d0-a375b87722df-operator-scripts\") pod \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\" (UID: \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\") " Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.824986 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6l5f\" (UniqueName: \"kubernetes.io/projected/b261bbe5-03e9-4ebe-a8d0-a375b87722df-kube-api-access-g6l5f\") pod \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\" (UID: \"b261bbe5-03e9-4ebe-a8d0-a375b87722df\") " Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.825453 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b261bbe5-03e9-4ebe-a8d0-a375b87722df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b261bbe5-03e9-4ebe-a8d0-a375b87722df" (UID: "b261bbe5-03e9-4ebe-a8d0-a375b87722df"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.846210 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b261bbe5-03e9-4ebe-a8d0-a375b87722df-kube-api-access-g6l5f" (OuterVolumeSpecName: "kube-api-access-g6l5f") pod "b261bbe5-03e9-4ebe-a8d0-a375b87722df" (UID: "b261bbe5-03e9-4ebe-a8d0-a375b87722df"). InnerVolumeSpecName "kube-api-access-g6l5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.926677 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b261bbe5-03e9-4ebe-a8d0-a375b87722df-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.926712 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6l5f\" (UniqueName: \"kubernetes.io/projected/b261bbe5-03e9-4ebe-a8d0-a375b87722df-kube-api-access-g6l5f\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:48 crc kubenswrapper[4697]: I0127 15:27:48.940038 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.029955 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-operator-scripts\") pod \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\" (UID: \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\") " Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.030152 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7wlw\" (UniqueName: \"kubernetes.io/projected/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-kube-api-access-j7wlw\") pod \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\" (UID: \"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6\") " Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.030322 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0bb870a-beb1-45cc-a1b2-30f6692a4cb6" (UID: "e0bb870a-beb1-45cc-a1b2-30f6692a4cb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.030668 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.034435 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-kube-api-access-j7wlw" (OuterVolumeSpecName: "kube-api-access-j7wlw") pod "e0bb870a-beb1-45cc-a1b2-30f6692a4cb6" (UID: "e0bb870a-beb1-45cc-a1b2-30f6692a4cb6"). InnerVolumeSpecName "kube-api-access-j7wlw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.132424 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7wlw\" (UniqueName: \"kubernetes.io/projected/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6-kube-api-access-j7wlw\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.230584 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.284743 4697 generic.go:334] "Generic (PLEG): container finished" podID="17d130bb-85ab-4a76-a5cb-09370b6165b7" containerID="8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444" exitCode=0 Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.284877 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" event={"ID":"17d130bb-85ab-4a76-a5cb-09370b6165b7","Type":"ContainerDied","Data":"8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444"} Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.284913 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" event={"ID":"17d130bb-85ab-4a76-a5cb-09370b6165b7","Type":"ContainerDied","Data":"62464c3fc48999ffbe1abee787184fb879e303cc591700757f2116ce663c8786"} Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.284941 4697 scope.go:117] "RemoveContainer" containerID="8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.285095 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5fwsh" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.297417 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dk6vd" event={"ID":"b261bbe5-03e9-4ebe-a8d0-a375b87722df","Type":"ContainerDied","Data":"2aa0c6fc8e0e189cfab7f04aa2bb9b1dda42ba899ecd0dc6a740a28f578c6f82"} Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.297456 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa0c6fc8e0e189cfab7f04aa2bb9b1dda42ba899ecd0dc6a740a28f578c6f82" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.297555 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dk6vd" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.311499 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ca1e-account-create-update-qktst" event={"ID":"e0bb870a-beb1-45cc-a1b2-30f6692a4cb6","Type":"ContainerDied","Data":"fbabbd2d7ae9f4d153ccc2ba81720bca7c88cdad6bb64bd980fd1257341ba3c9"} Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.311532 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbabbd2d7ae9f4d153ccc2ba81720bca7c88cdad6bb64bd980fd1257341ba3c9" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.311577 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ca1e-account-create-update-qktst" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.334462 4697 scope.go:117] "RemoveContainer" containerID="c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.335484 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-config\") pod \"17d130bb-85ab-4a76-a5cb-09370b6165b7\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.335514 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-sb\") pod \"17d130bb-85ab-4a76-a5cb-09370b6165b7\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.335643 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-dns-svc\") pod \"17d130bb-85ab-4a76-a5cb-09370b6165b7\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.335664 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-nb\") pod \"17d130bb-85ab-4a76-a5cb-09370b6165b7\" (UID: \"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.335772 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srcz6\" (UniqueName: \"kubernetes.io/projected/17d130bb-85ab-4a76-a5cb-09370b6165b7-kube-api-access-srcz6\") pod \"17d130bb-85ab-4a76-a5cb-09370b6165b7\" (UID: 
\"17d130bb-85ab-4a76-a5cb-09370b6165b7\") " Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.347260 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d130bb-85ab-4a76-a5cb-09370b6165b7-kube-api-access-srcz6" (OuterVolumeSpecName: "kube-api-access-srcz6") pod "17d130bb-85ab-4a76-a5cb-09370b6165b7" (UID: "17d130bb-85ab-4a76-a5cb-09370b6165b7"). InnerVolumeSpecName "kube-api-access-srcz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.389727 4697 scope.go:117] "RemoveContainer" containerID="8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444" Jan 27 15:27:49 crc kubenswrapper[4697]: E0127 15:27:49.392293 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444\": container with ID starting with 8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444 not found: ID does not exist" containerID="8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.392324 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444"} err="failed to get container status \"8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444\": rpc error: code = NotFound desc = could not find container \"8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444\": container with ID starting with 8ce3f4ba140789ddc7fc65de38844dac05645728b2451b178cc702d226eb1444 not found: ID does not exist" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.392345 4697 scope.go:117] "RemoveContainer" containerID="c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c" Jan 27 15:27:49 crc kubenswrapper[4697]: E0127 15:27:49.393596 4697 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c\": container with ID starting with c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c not found: ID does not exist" containerID="c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.393625 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c"} err="failed to get container status \"c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c\": rpc error: code = NotFound desc = could not find container \"c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c\": container with ID starting with c08e5695fecc873af22df46da4696a764b1f5926aa5341f8440852914cc3769c not found: ID does not exist" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.406928 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5lnm2"] Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.407737 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17d130bb-85ab-4a76-a5cb-09370b6165b7" (UID: "17d130bb-85ab-4a76-a5cb-09370b6165b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.408577 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17d130bb-85ab-4a76-a5cb-09370b6165b7" (UID: "17d130bb-85ab-4a76-a5cb-09370b6165b7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.413223 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5lnm2"] Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.418295 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-config" (OuterVolumeSpecName: "config") pod "17d130bb-85ab-4a76-a5cb-09370b6165b7" (UID: "17d130bb-85ab-4a76-a5cb-09370b6165b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.422565 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17d130bb-85ab-4a76-a5cb-09370b6165b7" (UID: "17d130bb-85ab-4a76-a5cb-09370b6165b7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.438018 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.438089 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.438100 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srcz6\" (UniqueName: \"kubernetes.io/projected/17d130bb-85ab-4a76-a5cb-09370b6165b7-kube-api-access-srcz6\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.438109 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.438117 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17d130bb-85ab-4a76-a5cb-09370b6165b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.624734 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5fwsh"] Jan 27 15:27:49 crc kubenswrapper[4697]: I0127 15:27:49.636870 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5fwsh"] Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.060615 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.164834 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7ndg\" (UniqueName: \"kubernetes.io/projected/59c870c6-b268-440d-a6a4-d1ea57382a67-kube-api-access-k7ndg\") pod \"59c870c6-b268-440d-a6a4-d1ea57382a67\" (UID: \"59c870c6-b268-440d-a6a4-d1ea57382a67\") " Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.165015 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c870c6-b268-440d-a6a4-d1ea57382a67-operator-scripts\") pod \"59c870c6-b268-440d-a6a4-d1ea57382a67\" (UID: \"59c870c6-b268-440d-a6a4-d1ea57382a67\") " Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.173542 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c870c6-b268-440d-a6a4-d1ea57382a67-kube-api-access-k7ndg" (OuterVolumeSpecName: "kube-api-access-k7ndg") pod "59c870c6-b268-440d-a6a4-d1ea57382a67" (UID: "59c870c6-b268-440d-a6a4-d1ea57382a67"). InnerVolumeSpecName "kube-api-access-k7ndg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.174532 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c870c6-b268-440d-a6a4-d1ea57382a67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59c870c6-b268-440d-a6a4-d1ea57382a67" (UID: "59c870c6-b268-440d-a6a4-d1ea57382a67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.201945 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.220116 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-scmbs" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.224213 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hflbt" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.266686 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-operator-scripts\") pod \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\" (UID: \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\") " Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.266802 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj7wj\" (UniqueName: \"kubernetes.io/projected/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-kube-api-access-hj7wj\") pod \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\" (UID: \"5e67f4cf-29a8-46ce-8ea6-757b703f82b1\") " Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.267276 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c870c6-b268-440d-a6a4-d1ea57382a67-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.267291 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7ndg\" (UniqueName: \"kubernetes.io/projected/59c870c6-b268-440d-a6a4-d1ea57382a67-kube-api-access-k7ndg\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.268334 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") 
pod "5e67f4cf-29a8-46ce-8ea6-757b703f82b1" (UID: "5e67f4cf-29a8-46ce-8ea6-757b703f82b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.269889 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-kube-api-access-hj7wj" (OuterVolumeSpecName: "kube-api-access-hj7wj") pod "5e67f4cf-29a8-46ce-8ea6-757b703f82b1" (UID: "5e67f4cf-29a8-46ce-8ea6-757b703f82b1"). InnerVolumeSpecName "kube-api-access-hj7wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.320414 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-958c-account-create-update-8gl6b" event={"ID":"5e67f4cf-29a8-46ce-8ea6-757b703f82b1","Type":"ContainerDied","Data":"c5f707e7620695778fdace9bc7ddf1de3ca33103908ab6c3e2be606ac57b159d"} Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.320492 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5f707e7620695778fdace9bc7ddf1de3ca33103908ab6c3e2be606ac57b159d" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.320438 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-958c-account-create-update-8gl6b" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.322838 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4e77-account-create-update-x64b2" event={"ID":"59c870c6-b268-440d-a6a4-d1ea57382a67","Type":"ContainerDied","Data":"3a158259a43a16b248cc034524a2a6a034c81e9ff83c480a3fba055859daed1a"} Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.322929 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4e77-account-create-update-x64b2" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.322998 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a158259a43a16b248cc034524a2a6a034c81e9ff83c480a3fba055859daed1a" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.326154 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-scmbs" event={"ID":"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e","Type":"ContainerDied","Data":"12da7037ffc21b32ca053755d0c70609cb3f95b770f00ac84b947a9fdd8db2a7"} Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.326192 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12da7037ffc21b32ca053755d0c70609cb3f95b770f00ac84b947a9fdd8db2a7" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.326249 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-scmbs" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.328320 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hflbt" event={"ID":"f8f1c9d2-cb07-4cd8-8614-30734cff2994","Type":"ContainerDied","Data":"7f9a136428db34a297f3d4df8af0f1e5a8cf1e982f6ca69c6982885cd5e26e9f"} Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.328342 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f9a136428db34a297f3d4df8af0f1e5a8cf1e982f6ca69c6982885cd5e26e9f" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.328491 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hflbt" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.367894 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s89p9\" (UniqueName: \"kubernetes.io/projected/f8f1c9d2-cb07-4cd8-8614-30734cff2994-kube-api-access-s89p9\") pod \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\" (UID: \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\") " Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.367931 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f1c9d2-cb07-4cd8-8614-30734cff2994-operator-scripts\") pod \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\" (UID: \"f8f1c9d2-cb07-4cd8-8614-30734cff2994\") " Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.367953 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-operator-scripts\") pod \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\" (UID: \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\") " Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.368304 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f1c9d2-cb07-4cd8-8614-30734cff2994-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8f1c9d2-cb07-4cd8-8614-30734cff2994" (UID: "f8f1c9d2-cb07-4cd8-8614-30734cff2994"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.368442 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpp4x\" (UniqueName: \"kubernetes.io/projected/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-kube-api-access-rpp4x\") pod \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\" (UID: \"35eda2ea-2c5d-446d-9065-cd7a9d12cd1e\") " Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.368461 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35eda2ea-2c5d-446d-9065-cd7a9d12cd1e" (UID: "35eda2ea-2c5d-446d-9065-cd7a9d12cd1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.369310 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f1c9d2-cb07-4cd8-8614-30734cff2994-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.369328 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.369338 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj7wj\" (UniqueName: \"kubernetes.io/projected/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-kube-api-access-hj7wj\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.369348 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e67f4cf-29a8-46ce-8ea6-757b703f82b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:50 crc 
kubenswrapper[4697]: I0127 15:27:50.371287 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f1c9d2-cb07-4cd8-8614-30734cff2994-kube-api-access-s89p9" (OuterVolumeSpecName: "kube-api-access-s89p9") pod "f8f1c9d2-cb07-4cd8-8614-30734cff2994" (UID: "f8f1c9d2-cb07-4cd8-8614-30734cff2994"). InnerVolumeSpecName "kube-api-access-s89p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.371883 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-kube-api-access-rpp4x" (OuterVolumeSpecName: "kube-api-access-rpp4x") pod "35eda2ea-2c5d-446d-9065-cd7a9d12cd1e" (UID: "35eda2ea-2c5d-446d-9065-cd7a9d12cd1e"). InnerVolumeSpecName "kube-api-access-rpp4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.471162 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpp4x\" (UniqueName: \"kubernetes.io/projected/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e-kube-api-access-rpp4x\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.471197 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s89p9\" (UniqueName: \"kubernetes.io/projected/f8f1c9d2-cb07-4cd8-8614-30734cff2994-kube-api-access-s89p9\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.582094 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d130bb-85ab-4a76-a5cb-09370b6165b7" path="/var/lib/kubelet/pods/17d130bb-85ab-4a76-a5cb-09370b6165b7/volumes" Jan 27 15:27:50 crc kubenswrapper[4697]: I0127 15:27:50.582970 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470d5b7e-11d4-44c4-ac71-698390cec599" path="/var/lib/kubelet/pods/470d5b7e-11d4-44c4-ac71-698390cec599/volumes" Jan 27 15:27:51 crc 
kubenswrapper[4697]: I0127 15:27:51.444325 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-6sgqx" podUID="72f31a1f-c388-4fed-9842-13f65cf91e9b" containerName="ovn-controller" probeResult="failure" output=< Jan 27 15:27:51 crc kubenswrapper[4697]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 15:27:51 crc kubenswrapper[4697]: > Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.492902 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.503854 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-p278q" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588278 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xgxk5"] Jan 27 15:27:51 crc kubenswrapper[4697]: E0127 15:27:51.588558 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b261bbe5-03e9-4ebe-a8d0-a375b87722df" containerName="mariadb-database-create" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588574 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b261bbe5-03e9-4ebe-a8d0-a375b87722df" containerName="mariadb-database-create" Jan 27 15:27:51 crc kubenswrapper[4697]: E0127 15:27:51.588584 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470d5b7e-11d4-44c4-ac71-698390cec599" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588589 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="470d5b7e-11d4-44c4-ac71-698390cec599" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: E0127 15:27:51.588598 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb870a-beb1-45cc-a1b2-30f6692a4cb6" containerName="mariadb-account-create-update" Jan 27 
15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588604 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb870a-beb1-45cc-a1b2-30f6692a4cb6" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: E0127 15:27:51.588618 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35eda2ea-2c5d-446d-9065-cd7a9d12cd1e" containerName="mariadb-database-create" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588623 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="35eda2ea-2c5d-446d-9065-cd7a9d12cd1e" containerName="mariadb-database-create" Jan 27 15:27:51 crc kubenswrapper[4697]: E0127 15:27:51.588633 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d130bb-85ab-4a76-a5cb-09370b6165b7" containerName="init" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588640 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d130bb-85ab-4a76-a5cb-09370b6165b7" containerName="init" Jan 27 15:27:51 crc kubenswrapper[4697]: E0127 15:27:51.588648 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d130bb-85ab-4a76-a5cb-09370b6165b7" containerName="dnsmasq-dns" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588654 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d130bb-85ab-4a76-a5cb-09370b6165b7" containerName="dnsmasq-dns" Jan 27 15:27:51 crc kubenswrapper[4697]: E0127 15:27:51.588665 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f1c9d2-cb07-4cd8-8614-30734cff2994" containerName="mariadb-database-create" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588672 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f1c9d2-cb07-4cd8-8614-30734cff2994" containerName="mariadb-database-create" Jan 27 15:27:51 crc kubenswrapper[4697]: E0127 15:27:51.588680 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c870c6-b268-440d-a6a4-d1ea57382a67" 
containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588686 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c870c6-b268-440d-a6a4-d1ea57382a67" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: E0127 15:27:51.588698 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e67f4cf-29a8-46ce-8ea6-757b703f82b1" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588705 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e67f4cf-29a8-46ce-8ea6-757b703f82b1" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588877 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="470d5b7e-11d4-44c4-ac71-698390cec599" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588888 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e67f4cf-29a8-46ce-8ea6-757b703f82b1" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588899 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb870a-beb1-45cc-a1b2-30f6692a4cb6" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588908 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f1c9d2-cb07-4cd8-8614-30734cff2994" containerName="mariadb-database-create" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588916 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="b261bbe5-03e9-4ebe-a8d0-a375b87722df" containerName="mariadb-database-create" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588923 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="35eda2ea-2c5d-446d-9065-cd7a9d12cd1e" containerName="mariadb-database-create" Jan 27 15:27:51 crc 
kubenswrapper[4697]: I0127 15:27:51.588936 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c870c6-b268-440d-a6a4-d1ea57382a67" containerName="mariadb-account-create-update" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.588945 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d130bb-85ab-4a76-a5cb-09370b6165b7" containerName="dnsmasq-dns" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.589378 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.591448 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dxlmx" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.591496 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.611020 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xgxk5"] Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.694737 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b89bk\" (UniqueName: \"kubernetes.io/projected/f75c5842-64d4-45c9-a282-b8fb8bea1af6-kube-api-access-b89bk\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.694873 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-db-sync-config-data\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.694946 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-combined-ca-bundle\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.695137 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-config-data\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.761743 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6sgqx-config-r62lp"] Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.762917 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.768110 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.797515 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-db-sync-config-data\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.797569 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-combined-ca-bundle\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc 
kubenswrapper[4697]: I0127 15:27:51.797672 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-config-data\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.797717 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b89bk\" (UniqueName: \"kubernetes.io/projected/f75c5842-64d4-45c9-a282-b8fb8bea1af6-kube-api-access-b89bk\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.802503 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-db-sync-config-data\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.805146 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-combined-ca-bundle\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.806107 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6sgqx-config-r62lp"] Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.820260 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-config-data\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" 
Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.851588 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b89bk\" (UniqueName: \"kubernetes.io/projected/f75c5842-64d4-45c9-a282-b8fb8bea1af6-kube-api-access-b89bk\") pod \"glance-db-sync-xgxk5\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.899088 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-scripts\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.899692 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ckv\" (UniqueName: \"kubernetes.io/projected/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-kube-api-access-d8ckv\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.899819 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-additional-scripts\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.899921 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " 
pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.900108 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-log-ovn\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.900266 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run-ovn\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:51 crc kubenswrapper[4697]: I0127 15:27:51.903971 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xgxk5" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.001818 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-scripts\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.001890 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ckv\" (UniqueName: \"kubernetes.io/projected/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-kube-api-access-d8ckv\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.001924 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-additional-scripts\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.001955 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.001977 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-log-ovn\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.002067 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run-ovn\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.002467 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run-ovn\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.002543 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.003159 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-log-ovn\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.004382 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-scripts\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.004749 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-additional-scripts\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.019893 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ckv\" (UniqueName: \"kubernetes.io/projected/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-kube-api-access-d8ckv\") pod \"ovn-controller-6sgqx-config-r62lp\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.077909 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.337433 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xgxk5"] Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.599149 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6sgqx-config-r62lp"] Jan 27 15:27:52 crc kubenswrapper[4697]: W0127 15:27:52.604513 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63de902c_4b1c_4fec_a5d4_573c52ff8c4a.slice/crio-0218342efa0e2d48f45372a374eb389d960bb60b033acc85e63d34a1be8f2c07 WatchSource:0}: Error finding container 0218342efa0e2d48f45372a374eb389d960bb60b033acc85e63d34a1be8f2c07: Status 404 returned error can't find the container with id 0218342efa0e2d48f45372a374eb389d960bb60b033acc85e63d34a1be8f2c07 Jan 27 15:27:52 crc kubenswrapper[4697]: I0127 15:27:52.769741 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.055074 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zm9sc"] Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.056236 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.063146 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.071682 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zm9sc"] Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.125351 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28beb3db-3bed-4d98-b02d-781812a54a1e-operator-scripts\") pod \"root-account-create-update-zm9sc\" (UID: \"28beb3db-3bed-4d98-b02d-781812a54a1e\") " pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.125410 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kln6\" (UniqueName: \"kubernetes.io/projected/28beb3db-3bed-4d98-b02d-781812a54a1e-kube-api-access-4kln6\") pod \"root-account-create-update-zm9sc\" (UID: \"28beb3db-3bed-4d98-b02d-781812a54a1e\") " pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.226423 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28beb3db-3bed-4d98-b02d-781812a54a1e-operator-scripts\") pod \"root-account-create-update-zm9sc\" (UID: \"28beb3db-3bed-4d98-b02d-781812a54a1e\") " pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.226482 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kln6\" (UniqueName: \"kubernetes.io/projected/28beb3db-3bed-4d98-b02d-781812a54a1e-kube-api-access-4kln6\") pod \"root-account-create-update-zm9sc\" (UID: 
\"28beb3db-3bed-4d98-b02d-781812a54a1e\") " pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.227132 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28beb3db-3bed-4d98-b02d-781812a54a1e-operator-scripts\") pod \"root-account-create-update-zm9sc\" (UID: \"28beb3db-3bed-4d98-b02d-781812a54a1e\") " pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.261445 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kln6\" (UniqueName: \"kubernetes.io/projected/28beb3db-3bed-4d98-b02d-781812a54a1e-kube-api-access-4kln6\") pod \"root-account-create-update-zm9sc\" (UID: \"28beb3db-3bed-4d98-b02d-781812a54a1e\") " pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.359015 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xgxk5" event={"ID":"f75c5842-64d4-45c9-a282-b8fb8bea1af6","Type":"ContainerStarted","Data":"f13488db6c04a180035cc860a9fa33ac2f12c65c59ed929a139bea8ddec9c293"} Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.361236 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6sgqx-config-r62lp" event={"ID":"63de902c-4b1c-4fec-a5d4-573c52ff8c4a","Type":"ContainerStarted","Data":"149ead3ebf8a54efc657986d8158df9b8a0b93f70d2ec71a6dc4f1748fec1467"} Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.361279 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6sgqx-config-r62lp" event={"ID":"63de902c-4b1c-4fec-a5d4-573c52ff8c4a","Type":"ContainerStarted","Data":"0218342efa0e2d48f45372a374eb389d960bb60b033acc85e63d34a1be8f2c07"} Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.375108 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.386251 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6sgqx-config-r62lp" podStartSLOduration=2.386230046 podStartE2EDuration="2.386230046s" podCreationTimestamp="2026-01-27 15:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:27:53.378670158 +0000 UTC m=+1169.551069939" watchObservedRunningTime="2026-01-27 15:27:53.386230046 +0000 UTC m=+1169.558629827" Jan 27 15:27:53 crc kubenswrapper[4697]: I0127 15:27:53.895485 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zm9sc"] Jan 27 15:27:54 crc kubenswrapper[4697]: I0127 15:27:54.376937 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2t78" event={"ID":"afa66008-cd63-46fa-8ac6-622e2b465eec","Type":"ContainerDied","Data":"3616ddcb175a2d176f8fb77d00789c8cb0ed50ff85d4daad3dfe11682e208a3a"} Jan 27 15:27:54 crc kubenswrapper[4697]: I0127 15:27:54.377024 4697 generic.go:334] "Generic (PLEG): container finished" podID="afa66008-cd63-46fa-8ac6-622e2b465eec" containerID="3616ddcb175a2d176f8fb77d00789c8cb0ed50ff85d4daad3dfe11682e208a3a" exitCode=0 Jan 27 15:27:54 crc kubenswrapper[4697]: I0127 15:27:54.380575 4697 generic.go:334] "Generic (PLEG): container finished" podID="28beb3db-3bed-4d98-b02d-781812a54a1e" containerID="a3851091dd16121e6a0150dc86e85c5f20ee73c74922a8b289249e30c8a4bf63" exitCode=0 Jan 27 15:27:54 crc kubenswrapper[4697]: I0127 15:27:54.380696 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zm9sc" event={"ID":"28beb3db-3bed-4d98-b02d-781812a54a1e","Type":"ContainerDied","Data":"a3851091dd16121e6a0150dc86e85c5f20ee73c74922a8b289249e30c8a4bf63"} Jan 27 15:27:54 crc kubenswrapper[4697]: 
I0127 15:27:54.380743 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zm9sc" event={"ID":"28beb3db-3bed-4d98-b02d-781812a54a1e","Type":"ContainerStarted","Data":"4fbf32b4b027093dca516ce7654beb8b998b794389f0b810038a641300eb5a63"} Jan 27 15:27:54 crc kubenswrapper[4697]: I0127 15:27:54.382844 4697 generic.go:334] "Generic (PLEG): container finished" podID="63de902c-4b1c-4fec-a5d4-573c52ff8c4a" containerID="149ead3ebf8a54efc657986d8158df9b8a0b93f70d2ec71a6dc4f1748fec1467" exitCode=0 Jan 27 15:27:54 crc kubenswrapper[4697]: I0127 15:27:54.382894 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6sgqx-config-r62lp" event={"ID":"63de902c-4b1c-4fec-a5d4-573c52ff8c4a","Type":"ContainerDied","Data":"149ead3ebf8a54efc657986d8158df9b8a0b93f70d2ec71a6dc4f1748fec1467"} Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.112064 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.112465 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.112565 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.113994 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"92d797174f2c61fd113567cb99c93ce3ccc4863dd93b46c4dc54df8e401db4fd"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.114133 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://92d797174f2c61fd113567cb99c93ce3ccc4863dd93b46c4dc54df8e401db4fd" gracePeriod=600 Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.393345 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="92d797174f2c61fd113567cb99c93ce3ccc4863dd93b46c4dc54df8e401db4fd" exitCode=0 Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.393517 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"92d797174f2c61fd113567cb99c93ce3ccc4863dd93b46c4dc54df8e401db4fd"} Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.393559 4697 scope.go:117] "RemoveContainer" containerID="939f9c93ba265c5d99e68011d55d9135f74940c6f260b8c578f1d67844ceb0ed" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.462361 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.470956 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/c4c66cac-c142-4579-9d13-053d43983229-etc-swift\") pod \"swift-storage-0\" (UID: \"c4c66cac-c142-4579-9d13-053d43983229\") " pod="openstack/swift-storage-0" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.771939 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.814927 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.869818 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sw69\" (UniqueName: \"kubernetes.io/projected/afa66008-cd63-46fa-8ac6-622e2b465eec-kube-api-access-8sw69\") pod \"afa66008-cd63-46fa-8ac6-622e2b465eec\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.869900 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-scripts\") pod \"afa66008-cd63-46fa-8ac6-622e2b465eec\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.869976 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-dispersionconf\") pod \"afa66008-cd63-46fa-8ac6-622e2b465eec\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.870040 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-combined-ca-bundle\") pod \"afa66008-cd63-46fa-8ac6-622e2b465eec\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " Jan 27 15:27:55 crc kubenswrapper[4697]: 
I0127 15:27:55.870109 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-ring-data-devices\") pod \"afa66008-cd63-46fa-8ac6-622e2b465eec\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.870129 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/afa66008-cd63-46fa-8ac6-622e2b465eec-etc-swift\") pod \"afa66008-cd63-46fa-8ac6-622e2b465eec\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.870164 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-swiftconf\") pod \"afa66008-cd63-46fa-8ac6-622e2b465eec\" (UID: \"afa66008-cd63-46fa-8ac6-622e2b465eec\") " Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.872416 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "afa66008-cd63-46fa-8ac6-622e2b465eec" (UID: "afa66008-cd63-46fa-8ac6-622e2b465eec"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.874184 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afa66008-cd63-46fa-8ac6-622e2b465eec-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "afa66008-cd63-46fa-8ac6-622e2b465eec" (UID: "afa66008-cd63-46fa-8ac6-622e2b465eec"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.887598 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa66008-cd63-46fa-8ac6-622e2b465eec-kube-api-access-8sw69" (OuterVolumeSpecName: "kube-api-access-8sw69") pod "afa66008-cd63-46fa-8ac6-622e2b465eec" (UID: "afa66008-cd63-46fa-8ac6-622e2b465eec"). InnerVolumeSpecName "kube-api-access-8sw69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.901645 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "afa66008-cd63-46fa-8ac6-622e2b465eec" (UID: "afa66008-cd63-46fa-8ac6-622e2b465eec"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.902290 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "afa66008-cd63-46fa-8ac6-622e2b465eec" (UID: "afa66008-cd63-46fa-8ac6-622e2b465eec"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.933420 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.935486 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afa66008-cd63-46fa-8ac6-622e2b465eec" (UID: "afa66008-cd63-46fa-8ac6-622e2b465eec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.950518 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.951981 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-scripts" (OuterVolumeSpecName: "scripts") pod "afa66008-cd63-46fa-8ac6-622e2b465eec" (UID: "afa66008-cd63-46fa-8ac6-622e2b465eec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.972126 4697 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.972151 4697 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/afa66008-cd63-46fa-8ac6-622e2b465eec-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.972162 4697 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.972171 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sw69\" (UniqueName: \"kubernetes.io/projected/afa66008-cd63-46fa-8ac6-622e2b465eec-kube-api-access-8sw69\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.972181 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afa66008-cd63-46fa-8ac6-622e2b465eec-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 
15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.972189 4697 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:55 crc kubenswrapper[4697]: I0127 15:27:55.972196 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa66008-cd63-46fa-8ac6-622e2b465eec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.073647 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-scripts\") pod \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.073697 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-additional-scripts\") pod \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.073797 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run\") pod \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.073836 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run-ovn\") pod \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.073866 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kln6\" (UniqueName: \"kubernetes.io/projected/28beb3db-3bed-4d98-b02d-781812a54a1e-kube-api-access-4kln6\") pod \"28beb3db-3bed-4d98-b02d-781812a54a1e\" (UID: \"28beb3db-3bed-4d98-b02d-781812a54a1e\") " Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.073894 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28beb3db-3bed-4d98-b02d-781812a54a1e-operator-scripts\") pod \"28beb3db-3bed-4d98-b02d-781812a54a1e\" (UID: \"28beb3db-3bed-4d98-b02d-781812a54a1e\") " Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.073942 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ckv\" (UniqueName: \"kubernetes.io/projected/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-kube-api-access-d8ckv\") pod \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.073973 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-log-ovn\") pod \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\" (UID: \"63de902c-4b1c-4fec-a5d4-573c52ff8c4a\") " Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.074298 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run" (OuterVolumeSpecName: "var-run") pod "63de902c-4b1c-4fec-a5d4-573c52ff8c4a" (UID: "63de902c-4b1c-4fec-a5d4-573c52ff8c4a"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.074341 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "63de902c-4b1c-4fec-a5d4-573c52ff8c4a" (UID: "63de902c-4b1c-4fec-a5d4-573c52ff8c4a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.074368 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "63de902c-4b1c-4fec-a5d4-573c52ff8c4a" (UID: "63de902c-4b1c-4fec-a5d4-573c52ff8c4a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.074413 4697 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.074485 4697 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.074648 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "63de902c-4b1c-4fec-a5d4-573c52ff8c4a" (UID: "63de902c-4b1c-4fec-a5d4-573c52ff8c4a"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.075089 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28beb3db-3bed-4d98-b02d-781812a54a1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28beb3db-3bed-4d98-b02d-781812a54a1e" (UID: "28beb3db-3bed-4d98-b02d-781812a54a1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.077228 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-scripts" (OuterVolumeSpecName: "scripts") pod "63de902c-4b1c-4fec-a5d4-573c52ff8c4a" (UID: "63de902c-4b1c-4fec-a5d4-573c52ff8c4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.078580 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28beb3db-3bed-4d98-b02d-781812a54a1e-kube-api-access-4kln6" (OuterVolumeSpecName: "kube-api-access-4kln6") pod "28beb3db-3bed-4d98-b02d-781812a54a1e" (UID: "28beb3db-3bed-4d98-b02d-781812a54a1e"). InnerVolumeSpecName "kube-api-access-4kln6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.079398 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-kube-api-access-d8ckv" (OuterVolumeSpecName: "kube-api-access-d8ckv") pod "63de902c-4b1c-4fec-a5d4-573c52ff8c4a" (UID: "63de902c-4b1c-4fec-a5d4-573c52ff8c4a"). InnerVolumeSpecName "kube-api-access-d8ckv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.176463 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kln6\" (UniqueName: \"kubernetes.io/projected/28beb3db-3bed-4d98-b02d-781812a54a1e-kube-api-access-4kln6\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.176506 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28beb3db-3bed-4d98-b02d-781812a54a1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.176523 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8ckv\" (UniqueName: \"kubernetes.io/projected/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-kube-api-access-d8ckv\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.176535 4697 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.176549 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.176559 4697 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63de902c-4b1c-4fec-a5d4-573c52ff8c4a-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.379736 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.404940 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"aadc7ec8e4b71218b15ccafc4aae6f56fed135408fdbb8396c6703d59f36a0b9"} Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.407740 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2t78" event={"ID":"afa66008-cd63-46fa-8ac6-622e2b465eec","Type":"ContainerDied","Data":"0ee3ccbb133bd2b876146df7e502d8d76c8769055031a9670bc56e8a5af53112"} Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.407817 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ee3ccbb133bd2b876146df7e502d8d76c8769055031a9670bc56e8a5af53112" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.407762 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2t78" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.412015 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zm9sc" event={"ID":"28beb3db-3bed-4d98-b02d-781812a54a1e","Type":"ContainerDied","Data":"4fbf32b4b027093dca516ce7654beb8b998b794389f0b810038a641300eb5a63"} Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.412064 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fbf32b4b027093dca516ce7654beb8b998b794389f0b810038a641300eb5a63" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.412143 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zm9sc" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.423748 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6sgqx-config-r62lp" event={"ID":"63de902c-4b1c-4fec-a5d4-573c52ff8c4a","Type":"ContainerDied","Data":"0218342efa0e2d48f45372a374eb389d960bb60b033acc85e63d34a1be8f2c07"} Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.423810 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0218342efa0e2d48f45372a374eb389d960bb60b033acc85e63d34a1be8f2c07" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.423879 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6sgqx-config-r62lp" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.480681 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"5f08c1e0b4fdd3c835b2715925dd8d1fa9438edf0fb56dd634b6fc87424d2b5d"} Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.502281 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-6sgqx" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.551669 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6sgqx-config-r62lp"] Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.553331 4697 status_manager.go:907] "Failed to delete status for pod" pod="openstack/ovn-controller-6sgqx-config-r62lp" err="pods \"ovn-controller-6sgqx-config-r62lp\" not found" Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.558001 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6sgqx-config-r62lp"] Jan 27 15:27:56 crc kubenswrapper[4697]: I0127 15:27:56.584643 4697 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="63de902c-4b1c-4fec-a5d4-573c52ff8c4a" path="/var/lib/kubelet/pods/63de902c-4b1c-4fec-a5d4-573c52ff8c4a/volumes" Jan 27 15:27:58 crc kubenswrapper[4697]: I0127 15:27:58.509244 4697 generic.go:334] "Generic (PLEG): container finished" podID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" containerID="840022435db03186405d1974a393a665897eadcc1c7df67f122cbcc886b3f4cc" exitCode=0 Jan 27 15:27:58 crc kubenswrapper[4697]: I0127 15:27:58.509329 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eda501db-ef38-4c1f-b2d6-3e009fe24e40","Type":"ContainerDied","Data":"840022435db03186405d1974a393a665897eadcc1c7df67f122cbcc886b3f4cc"} Jan 27 15:27:59 crc kubenswrapper[4697]: I0127 15:27:59.503767 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zm9sc"] Jan 27 15:27:59 crc kubenswrapper[4697]: I0127 15:27:59.513928 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zm9sc"] Jan 27 15:27:59 crc kubenswrapper[4697]: I0127 15:27:59.529214 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eda501db-ef38-4c1f-b2d6-3e009fe24e40","Type":"ContainerStarted","Data":"b99a323fb1faec530a3f0d6f4c8ee524ea60d2eceda116d7699ad05c31946607"} Jan 27 15:27:59 crc kubenswrapper[4697]: I0127 15:27:59.531034 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:27:59 crc kubenswrapper[4697]: I0127 15:27:59.543196 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"dfdfd7ef7dddf22cde984a6b8217870bd0641e7f810315d68b9d23aae1baf55d"} Jan 27 15:27:59 crc kubenswrapper[4697]: I0127 15:27:59.543247 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"1b556a20c1136f220ef47134db3f4e9c1d0a1920da01f296f5bbdfe9e7fdf15a"} Jan 27 15:27:59 crc kubenswrapper[4697]: I0127 15:27:59.579814 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371958.275002 podStartE2EDuration="1m18.579773601s" podCreationTimestamp="2026-01-27 15:26:41 +0000 UTC" firstStartedPulling="2026-01-27 15:26:43.746368029 +0000 UTC m=+1099.918767800" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:27:59.571452169 +0000 UTC m=+1175.743851970" watchObservedRunningTime="2026-01-27 15:27:59.579773601 +0000 UTC m=+1175.752173382" Jan 27 15:28:00 crc kubenswrapper[4697]: I0127 15:28:00.554401 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"32845abecf85c9a46c030c1da5647cbb85083e99c061603a0e393a4016a17ca4"} Jan 27 15:28:00 crc kubenswrapper[4697]: I0127 15:28:00.554925 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"fc31460968fad5e33f9f19df858df9eccc59f40755f7b370317886d24b17763a"} Jan 27 15:28:00 crc kubenswrapper[4697]: I0127 15:28:00.581245 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28beb3db-3bed-4d98-b02d-781812a54a1e" path="/var/lib/kubelet/pods/28beb3db-3bed-4d98-b02d-781812a54a1e/volumes" Jan 27 15:28:02 crc kubenswrapper[4697]: I0127 15:28:02.878011 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.268409 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8q5r5"] Jan 27 15:28:03 crc kubenswrapper[4697]: E0127 15:28:03.269138 
4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28beb3db-3bed-4d98-b02d-781812a54a1e" containerName="mariadb-account-create-update" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.269162 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="28beb3db-3bed-4d98-b02d-781812a54a1e" containerName="mariadb-account-create-update" Jan 27 15:28:03 crc kubenswrapper[4697]: E0127 15:28:03.269178 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa66008-cd63-46fa-8ac6-622e2b465eec" containerName="swift-ring-rebalance" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.269187 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa66008-cd63-46fa-8ac6-622e2b465eec" containerName="swift-ring-rebalance" Jan 27 15:28:03 crc kubenswrapper[4697]: E0127 15:28:03.269202 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63de902c-4b1c-4fec-a5d4-573c52ff8c4a" containerName="ovn-config" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.269212 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="63de902c-4b1c-4fec-a5d4-573c52ff8c4a" containerName="ovn-config" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.269450 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa66008-cd63-46fa-8ac6-622e2b465eec" containerName="swift-ring-rebalance" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.269479 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="63de902c-4b1c-4fec-a5d4-573c52ff8c4a" containerName="ovn-config" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.269492 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="28beb3db-3bed-4d98-b02d-781812a54a1e" containerName="mariadb-account-create-update" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.270176 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.353847 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8q5r5"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.377134 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-02c8-account-create-update-2mlxf"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.378379 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.381497 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.416031 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8vtn\" (UniqueName: \"kubernetes.io/projected/586cb948-6d70-4f31-b21b-9088567a2d5c-kube-api-access-g8vtn\") pod \"cinder-db-create-8q5r5\" (UID: \"586cb948-6d70-4f31-b21b-9088567a2d5c\") " pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.416082 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/586cb948-6d70-4f31-b21b-9088567a2d5c-operator-scripts\") pod \"cinder-db-create-8q5r5\" (UID: \"586cb948-6d70-4f31-b21b-9088567a2d5c\") " pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.465747 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-02c8-account-create-update-2mlxf"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.518560 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8vtn\" (UniqueName: 
\"kubernetes.io/projected/586cb948-6d70-4f31-b21b-9088567a2d5c-kube-api-access-g8vtn\") pod \"cinder-db-create-8q5r5\" (UID: \"586cb948-6d70-4f31-b21b-9088567a2d5c\") " pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.518611 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/586cb948-6d70-4f31-b21b-9088567a2d5c-operator-scripts\") pod \"cinder-db-create-8q5r5\" (UID: \"586cb948-6d70-4f31-b21b-9088567a2d5c\") " pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.518657 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7812f067-3dde-40e3-9a87-7e5d5d7d9597-operator-scripts\") pod \"cinder-02c8-account-create-update-2mlxf\" (UID: \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\") " pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.518689 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vglwh\" (UniqueName: \"kubernetes.io/projected/7812f067-3dde-40e3-9a87-7e5d5d7d9597-kube-api-access-vglwh\") pod \"cinder-02c8-account-create-update-2mlxf\" (UID: \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\") " pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.519712 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/586cb948-6d70-4f31-b21b-9088567a2d5c-operator-scripts\") pod \"cinder-db-create-8q5r5\" (UID: \"586cb948-6d70-4f31-b21b-9088567a2d5c\") " pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.543142 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g8vtn\" (UniqueName: \"kubernetes.io/projected/586cb948-6d70-4f31-b21b-9088567a2d5c-kube-api-access-g8vtn\") pod \"cinder-db-create-8q5r5\" (UID: \"586cb948-6d70-4f31-b21b-9088567a2d5c\") " pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.589013 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.605693 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-m5xbv"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.606641 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.620572 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7812f067-3dde-40e3-9a87-7e5d5d7d9597-operator-scripts\") pod \"cinder-02c8-account-create-update-2mlxf\" (UID: \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\") " pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.620641 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vglwh\" (UniqueName: \"kubernetes.io/projected/7812f067-3dde-40e3-9a87-7e5d5d7d9597-kube-api-access-vglwh\") pod \"cinder-02c8-account-create-update-2mlxf\" (UID: \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\") " pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.621989 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7812f067-3dde-40e3-9a87-7e5d5d7d9597-operator-scripts\") pod \"cinder-02c8-account-create-update-2mlxf\" (UID: \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\") " 
pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.640464 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3ba9-account-create-update-d6wqk"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.642018 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.645314 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.652991 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m5xbv"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.665733 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3ba9-account-create-update-d6wqk"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.678368 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vglwh\" (UniqueName: \"kubernetes.io/projected/7812f067-3dde-40e3-9a87-7e5d5d7d9597-kube-api-access-vglwh\") pod \"cinder-02c8-account-create-update-2mlxf\" (UID: \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\") " pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.687426 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ph6xf"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.690117 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.715226 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.716988 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ph6xf"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.724592 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0f2382-375f-49bd-84e7-2de103947c5e-operator-scripts\") pod \"barbican-db-create-m5xbv\" (UID: \"ff0f2382-375f-49bd-84e7-2de103947c5e\") " pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.724664 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8xn\" (UniqueName: \"kubernetes.io/projected/ff0f2382-375f-49bd-84e7-2de103947c5e-kube-api-access-fn8xn\") pod \"barbican-db-create-m5xbv\" (UID: \"ff0f2382-375f-49bd-84e7-2de103947c5e\") " pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.844519 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdc4k\" (UniqueName: \"kubernetes.io/projected/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-kube-api-access-kdc4k\") pod \"neutron-db-create-ph6xf\" (UID: \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\") " pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.844590 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0f2382-375f-49bd-84e7-2de103947c5e-operator-scripts\") pod \"barbican-db-create-m5xbv\" (UID: \"ff0f2382-375f-49bd-84e7-2de103947c5e\") " pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.844668 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vbgwv\" (UniqueName: \"kubernetes.io/projected/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-kube-api-access-vbgwv\") pod \"barbican-3ba9-account-create-update-d6wqk\" (UID: \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\") " pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.844739 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn8xn\" (UniqueName: \"kubernetes.io/projected/ff0f2382-375f-49bd-84e7-2de103947c5e-kube-api-access-fn8xn\") pod \"barbican-db-create-m5xbv\" (UID: \"ff0f2382-375f-49bd-84e7-2de103947c5e\") " pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.844798 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-operator-scripts\") pod \"neutron-db-create-ph6xf\" (UID: \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\") " pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.844843 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-operator-scripts\") pod \"barbican-3ba9-account-create-update-d6wqk\" (UID: \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\") " pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.845876 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0f2382-375f-49bd-84e7-2de103947c5e-operator-scripts\") pod \"barbican-db-create-m5xbv\" (UID: \"ff0f2382-375f-49bd-84e7-2de103947c5e\") " pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.852056 4697 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a724-account-create-update-gldwc"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.854431 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.858387 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.918994 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a724-account-create-update-gldwc"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.921948 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn8xn\" (UniqueName: \"kubernetes.io/projected/ff0f2382-375f-49bd-84e7-2de103947c5e-kube-api-access-fn8xn\") pod \"barbican-db-create-m5xbv\" (UID: \"ff0f2382-375f-49bd-84e7-2de103947c5e\") " pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.926117 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4tjc9"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.927186 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.930551 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hr2gd" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.930995 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.931595 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.931738 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.958122 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdc4k\" (UniqueName: \"kubernetes.io/projected/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-kube-api-access-kdc4k\") pod \"neutron-db-create-ph6xf\" (UID: \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\") " pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.958204 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2vj\" (UniqueName: \"kubernetes.io/projected/8fe4958e-ea1c-4420-939b-1ff0c52690fa-kube-api-access-nq2vj\") pod \"keystone-db-sync-4tjc9\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.958254 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbgwv\" (UniqueName: \"kubernetes.io/projected/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-kube-api-access-vbgwv\") pod \"barbican-3ba9-account-create-update-d6wqk\" (UID: \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\") " pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 
15:28:03.958306 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97k6\" (UniqueName: \"kubernetes.io/projected/45de88c8-a3d8-4d43-84af-0f72cabc6057-kube-api-access-f97k6\") pod \"neutron-a724-account-create-update-gldwc\" (UID: \"45de88c8-a3d8-4d43-84af-0f72cabc6057\") " pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.958340 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-operator-scripts\") pod \"neutron-db-create-ph6xf\" (UID: \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\") " pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.958377 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-config-data\") pod \"keystone-db-sync-4tjc9\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.958400 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-operator-scripts\") pod \"barbican-3ba9-account-create-update-d6wqk\" (UID: \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\") " pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.958430 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45de88c8-a3d8-4d43-84af-0f72cabc6057-operator-scripts\") pod \"neutron-a724-account-create-update-gldwc\" (UID: \"45de88c8-a3d8-4d43-84af-0f72cabc6057\") " 
pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.958454 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-combined-ca-bundle\") pod \"keystone-db-sync-4tjc9\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.959410 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-operator-scripts\") pod \"barbican-3ba9-account-create-update-d6wqk\" (UID: \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\") " pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.959629 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.960816 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-operator-scripts\") pod \"neutron-db-create-ph6xf\" (UID: \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\") " pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.973883 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4tjc9"] Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.987275 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbgwv\" (UniqueName: \"kubernetes.io/projected/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-kube-api-access-vbgwv\") pod \"barbican-3ba9-account-create-update-d6wqk\" (UID: \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\") " 
pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:03 crc kubenswrapper[4697]: I0127 15:28:03.993600 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdc4k\" (UniqueName: \"kubernetes.io/projected/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-kube-api-access-kdc4k\") pod \"neutron-db-create-ph6xf\" (UID: \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\") " pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.035000 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.060044 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2vj\" (UniqueName: \"kubernetes.io/projected/8fe4958e-ea1c-4420-939b-1ff0c52690fa-kube-api-access-nq2vj\") pod \"keystone-db-sync-4tjc9\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.060139 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f97k6\" (UniqueName: \"kubernetes.io/projected/45de88c8-a3d8-4d43-84af-0f72cabc6057-kube-api-access-f97k6\") pod \"neutron-a724-account-create-update-gldwc\" (UID: \"45de88c8-a3d8-4d43-84af-0f72cabc6057\") " pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.060188 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-config-data\") pod \"keystone-db-sync-4tjc9\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.060215 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/45de88c8-a3d8-4d43-84af-0f72cabc6057-operator-scripts\") pod \"neutron-a724-account-create-update-gldwc\" (UID: \"45de88c8-a3d8-4d43-84af-0f72cabc6057\") " pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.060241 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-combined-ca-bundle\") pod \"keystone-db-sync-4tjc9\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.061978 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45de88c8-a3d8-4d43-84af-0f72cabc6057-operator-scripts\") pod \"neutron-a724-account-create-update-gldwc\" (UID: \"45de88c8-a3d8-4d43-84af-0f72cabc6057\") " pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.067064 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-combined-ca-bundle\") pod \"keystone-db-sync-4tjc9\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.067901 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-config-data\") pod \"keystone-db-sync-4tjc9\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.087243 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2vj\" (UniqueName: 
\"kubernetes.io/projected/8fe4958e-ea1c-4420-939b-1ff0c52690fa-kube-api-access-nq2vj\") pod \"keystone-db-sync-4tjc9\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.088457 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97k6\" (UniqueName: \"kubernetes.io/projected/45de88c8-a3d8-4d43-84af-0f72cabc6057-kube-api-access-f97k6\") pod \"neutron-a724-account-create-update-gldwc\" (UID: \"45de88c8-a3d8-4d43-84af-0f72cabc6057\") " pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.180827 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.260118 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.356061 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.518022 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-g8zkt"] Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.518949 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.524213 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.526464 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g8zkt"] Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.570043 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwxt\" (UniqueName: \"kubernetes.io/projected/17ec00c7-c7ad-4705-bd75-386e42e74100-kube-api-access-hzwxt\") pod \"root-account-create-update-g8zkt\" (UID: \"17ec00c7-c7ad-4705-bd75-386e42e74100\") " pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.570124 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17ec00c7-c7ad-4705-bd75-386e42e74100-operator-scripts\") pod \"root-account-create-update-g8zkt\" (UID: \"17ec00c7-c7ad-4705-bd75-386e42e74100\") " pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.671885 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzwxt\" (UniqueName: \"kubernetes.io/projected/17ec00c7-c7ad-4705-bd75-386e42e74100-kube-api-access-hzwxt\") pod \"root-account-create-update-g8zkt\" (UID: \"17ec00c7-c7ad-4705-bd75-386e42e74100\") " pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.671955 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17ec00c7-c7ad-4705-bd75-386e42e74100-operator-scripts\") pod \"root-account-create-update-g8zkt\" (UID: 
\"17ec00c7-c7ad-4705-bd75-386e42e74100\") " pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.672619 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17ec00c7-c7ad-4705-bd75-386e42e74100-operator-scripts\") pod \"root-account-create-update-g8zkt\" (UID: \"17ec00c7-c7ad-4705-bd75-386e42e74100\") " pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.693927 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwxt\" (UniqueName: \"kubernetes.io/projected/17ec00c7-c7ad-4705-bd75-386e42e74100-kube-api-access-hzwxt\") pod \"root-account-create-update-g8zkt\" (UID: \"17ec00c7-c7ad-4705-bd75-386e42e74100\") " pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:04 crc kubenswrapper[4697]: I0127 15:28:04.838079 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:10 crc kubenswrapper[4697]: E0127 15:28:10.176454 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 27 15:28:10 crc kubenswrapper[4697]: E0127 15:28:10.177337 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b89bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privil
eged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xgxk5_openstack(f75c5842-64d4-45c9-a282-b8fb8bea1af6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:28:10 crc kubenswrapper[4697]: E0127 15:28:10.178755 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-xgxk5" podUID="f75c5842-64d4-45c9-a282-b8fb8bea1af6" Jan 27 15:28:10 crc kubenswrapper[4697]: E0127 15:28:10.652605 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-xgxk5" podUID="f75c5842-64d4-45c9-a282-b8fb8bea1af6" Jan 27 15:28:10 crc kubenswrapper[4697]: I0127 15:28:10.793570 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8q5r5"] Jan 27 15:28:10 crc kubenswrapper[4697]: I0127 15:28:10.868191 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a724-account-create-update-gldwc"] Jan 27 15:28:10 crc kubenswrapper[4697]: I0127 15:28:10.882212 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-02c8-account-create-update-2mlxf"] Jan 27 15:28:10 crc kubenswrapper[4697]: I0127 15:28:10.899691 4697 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m5xbv"] Jan 27 15:28:10 crc kubenswrapper[4697]: W0127 15:28:10.900682 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45de88c8_a3d8_4d43_84af_0f72cabc6057.slice/crio-e73340246cd1e8f0f0c87e18e9bfeb35c0196c830a6512442e6b966c51c43b75 WatchSource:0}: Error finding container e73340246cd1e8f0f0c87e18e9bfeb35c0196c830a6512442e6b966c51c43b75: Status 404 returned error can't find the container with id e73340246cd1e8f0f0c87e18e9bfeb35c0196c830a6512442e6b966c51c43b75 Jan 27 15:28:10 crc kubenswrapper[4697]: I0127 15:28:10.905935 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4tjc9"] Jan 27 15:28:10 crc kubenswrapper[4697]: W0127 15:28:10.925641 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fe4958e_ea1c_4420_939b_1ff0c52690fa.slice/crio-a8d7272217ee519b71d71ab84a895182f689d56ad9c73d97e2f5f800e8199846 WatchSource:0}: Error finding container a8d7272217ee519b71d71ab84a895182f689d56ad9c73d97e2f5f800e8199846: Status 404 returned error can't find the container with id a8d7272217ee519b71d71ab84a895182f689d56ad9c73d97e2f5f800e8199846 Jan 27 15:28:10 crc kubenswrapper[4697]: W0127 15:28:10.927021 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff0f2382_375f_49bd_84e7_2de103947c5e.slice/crio-e5c885fb19560b1c2d68f142040fc25fb5b2024ef7c6fea76cb29708839af416 WatchSource:0}: Error finding container e5c885fb19560b1c2d68f142040fc25fb5b2024ef7c6fea76cb29708839af416: Status 404 returned error can't find the container with id e5c885fb19560b1c2d68f142040fc25fb5b2024ef7c6fea76cb29708839af416 Jan 27 15:28:10 crc kubenswrapper[4697]: I0127 15:28:10.989211 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-ph6xf"] Jan 27 15:28:10 crc kubenswrapper[4697]: W0127 15:28:10.994469 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26234d1c_51cd_4578_9ec6_cf5a7e8dc55c.slice/crio-640cf55dc72658362579cbb73c321ed3c1157ab0a40fa33a15575b741c6a32ef WatchSource:0}: Error finding container 640cf55dc72658362579cbb73c321ed3c1157ab0a40fa33a15575b741c6a32ef: Status 404 returned error can't find the container with id 640cf55dc72658362579cbb73c321ed3c1157ab0a40fa33a15575b741c6a32ef Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.055382 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g8zkt"] Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.074507 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3ba9-account-create-update-d6wqk"] Jan 27 15:28:11 crc kubenswrapper[4697]: W0127 15:28:11.086115 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0f61efb_e85d_4d6a_88d3_64b0a22dd759.slice/crio-ac55ea04a61e6a713b617da795992a3d383f39d2c18497cf3454d14127deb8b6 WatchSource:0}: Error finding container ac55ea04a61e6a713b617da795992a3d383f39d2c18497cf3454d14127deb8b6: Status 404 returned error can't find the container with id ac55ea04a61e6a713b617da795992a3d383f39d2c18497cf3454d14127deb8b6 Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.662702 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3ba9-account-create-update-d6wqk" event={"ID":"d0f61efb-e85d-4d6a-88d3-64b0a22dd759","Type":"ContainerStarted","Data":"c13661180bdf46ee7e077d53b2ad5b73dcd82175c3391648dd3cc79125a12f6a"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.662744 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3ba9-account-create-update-d6wqk" 
event={"ID":"d0f61efb-e85d-4d6a-88d3-64b0a22dd759","Type":"ContainerStarted","Data":"ac55ea04a61e6a713b617da795992a3d383f39d2c18497cf3454d14127deb8b6"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.664754 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-02c8-account-create-update-2mlxf" event={"ID":"7812f067-3dde-40e3-9a87-7e5d5d7d9597","Type":"ContainerStarted","Data":"1fdfa219d1e84f3159240b033968389ea916f9bb7c05d306752262ba3e6cebb9"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.664799 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-02c8-account-create-update-2mlxf" event={"ID":"7812f067-3dde-40e3-9a87-7e5d5d7d9597","Type":"ContainerStarted","Data":"494abfc067140c0de4d34d45245fd5de243fb544375f8e0f3123ce90f38415ec"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.666645 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8q5r5" event={"ID":"586cb948-6d70-4f31-b21b-9088567a2d5c","Type":"ContainerStarted","Data":"a95db96655817a0910af44fe172774ddcf550a70d788ba5d5ee344897c2bb36e"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.666668 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8q5r5" event={"ID":"586cb948-6d70-4f31-b21b-9088567a2d5c","Type":"ContainerStarted","Data":"eb210ced6710cf20584b50705772a143271423dbd498f0224b12c2016041531c"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.668711 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ph6xf" event={"ID":"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c","Type":"ContainerStarted","Data":"9b76737ca08d333ff296f533d7727d1b52c2558ce441d9f993c6440d4bf4857b"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.668756 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ph6xf" 
event={"ID":"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c","Type":"ContainerStarted","Data":"640cf55dc72658362579cbb73c321ed3c1157ab0a40fa33a15575b741c6a32ef"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.671359 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m5xbv" event={"ID":"ff0f2382-375f-49bd-84e7-2de103947c5e","Type":"ContainerStarted","Data":"db2d41f8c7593cec6a2c4b6ed52c911f342ade7f340e0e5d2dc7aadf73e99594"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.671417 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m5xbv" event={"ID":"ff0f2382-375f-49bd-84e7-2de103947c5e","Type":"ContainerStarted","Data":"e5c885fb19560b1c2d68f142040fc25fb5b2024ef7c6fea76cb29708839af416"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.672740 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a724-account-create-update-gldwc" event={"ID":"45de88c8-a3d8-4d43-84af-0f72cabc6057","Type":"ContainerStarted","Data":"7e0e9df04c17427c4702d056553fc24f66ea4348c32523995153df39a1c2af62"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.672817 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a724-account-create-update-gldwc" event={"ID":"45de88c8-a3d8-4d43-84af-0f72cabc6057","Type":"ContainerStarted","Data":"e73340246cd1e8f0f0c87e18e9bfeb35c0196c830a6512442e6b966c51c43b75"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.674724 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g8zkt" event={"ID":"17ec00c7-c7ad-4705-bd75-386e42e74100","Type":"ContainerStarted","Data":"f68b7eb1707fe08e5fd67cb3c11131787a1326e928789e44ace2163d4dacc47b"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.674774 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g8zkt" 
event={"ID":"17ec00c7-c7ad-4705-bd75-386e42e74100","Type":"ContainerStarted","Data":"d88d9c604274284ed5237832b950a69ea7af56a7b2ba801ee9b8903fe39fcee2"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.675666 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4tjc9" event={"ID":"8fe4958e-ea1c-4420-939b-1ff0c52690fa","Type":"ContainerStarted","Data":"a8d7272217ee519b71d71ab84a895182f689d56ad9c73d97e2f5f800e8199846"} Jan 27 15:28:11 crc kubenswrapper[4697]: I0127 15:28:11.690545 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-8q5r5" podStartSLOduration=8.69052405 podStartE2EDuration="8.69052405s" podCreationTimestamp="2026-01-27 15:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:28:11.684936654 +0000 UTC m=+1187.857336435" watchObservedRunningTime="2026-01-27 15:28:11.69052405 +0000 UTC m=+1187.862923841" Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.685081 4697 generic.go:334] "Generic (PLEG): container finished" podID="17ec00c7-c7ad-4705-bd75-386e42e74100" containerID="f68b7eb1707fe08e5fd67cb3c11131787a1326e928789e44ace2163d4dacc47b" exitCode=0 Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.685496 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g8zkt" event={"ID":"17ec00c7-c7ad-4705-bd75-386e42e74100","Type":"ContainerDied","Data":"f68b7eb1707fe08e5fd67cb3c11131787a1326e928789e44ace2163d4dacc47b"} Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.687691 4697 generic.go:334] "Generic (PLEG): container finished" podID="586cb948-6d70-4f31-b21b-9088567a2d5c" containerID="a95db96655817a0910af44fe172774ddcf550a70d788ba5d5ee344897c2bb36e" exitCode=0 Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.687756 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-8q5r5" event={"ID":"586cb948-6d70-4f31-b21b-9088567a2d5c","Type":"ContainerDied","Data":"a95db96655817a0910af44fe172774ddcf550a70d788ba5d5ee344897c2bb36e"} Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.689941 4697 generic.go:334] "Generic (PLEG): container finished" podID="26234d1c-51cd-4578-9ec6-cf5a7e8dc55c" containerID="9b76737ca08d333ff296f533d7727d1b52c2558ce441d9f993c6440d4bf4857b" exitCode=0 Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.690844 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ph6xf" event={"ID":"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c","Type":"ContainerDied","Data":"9b76737ca08d333ff296f533d7727d1b52c2558ce441d9f993c6440d4bf4857b"} Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.746121 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-02c8-account-create-update-2mlxf" podStartSLOduration=9.746103135 podStartE2EDuration="9.746103135s" podCreationTimestamp="2026-01-27 15:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:28:12.724832158 +0000 UTC m=+1188.897231949" watchObservedRunningTime="2026-01-27 15:28:12.746103135 +0000 UTC m=+1188.918502916" Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.753056 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-a724-account-create-update-gldwc" podStartSLOduration=9.753039074 podStartE2EDuration="9.753039074s" podCreationTimestamp="2026-01-27 15:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:28:12.737096016 +0000 UTC m=+1188.909495797" watchObservedRunningTime="2026-01-27 15:28:12.753039074 +0000 UTC m=+1188.925438855" Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.771191 4697 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-3ba9-account-create-update-d6wqk" podStartSLOduration=9.771171804 podStartE2EDuration="9.771171804s" podCreationTimestamp="2026-01-27 15:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:28:12.769195406 +0000 UTC m=+1188.941595187" watchObservedRunningTime="2026-01-27 15:28:12.771171804 +0000 UTC m=+1188.943571585" Jan 27 15:28:12 crc kubenswrapper[4697]: I0127 15:28:12.802746 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-m5xbv" podStartSLOduration=9.80272289 podStartE2EDuration="9.80272289s" podCreationTimestamp="2026-01-27 15:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:28:12.796332255 +0000 UTC m=+1188.968732036" watchObservedRunningTime="2026-01-27 15:28:12.80272289 +0000 UTC m=+1188.975122671" Jan 27 15:28:13 crc kubenswrapper[4697]: I0127 15:28:13.157879 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 27 15:28:13 crc kubenswrapper[4697]: I0127 15:28:13.705767 4697 generic.go:334] "Generic (PLEG): container finished" podID="45de88c8-a3d8-4d43-84af-0f72cabc6057" containerID="7e0e9df04c17427c4702d056553fc24f66ea4348c32523995153df39a1c2af62" exitCode=0 Jan 27 15:28:13 crc kubenswrapper[4697]: I0127 15:28:13.705885 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a724-account-create-update-gldwc" event={"ID":"45de88c8-a3d8-4d43-84af-0f72cabc6057","Type":"ContainerDied","Data":"7e0e9df04c17427c4702d056553fc24f66ea4348c32523995153df39a1c2af62"} Jan 27 15:28:13 crc 
kubenswrapper[4697]: I0127 15:28:13.720113 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"c82af29838dbf39955d089e31d691c54a77e521ce1111af963ee50f2ab79df6a"} Jan 27 15:28:13 crc kubenswrapper[4697]: I0127 15:28:13.722160 4697 generic.go:334] "Generic (PLEG): container finished" podID="ff0f2382-375f-49bd-84e7-2de103947c5e" containerID="db2d41f8c7593cec6a2c4b6ed52c911f342ade7f340e0e5d2dc7aadf73e99594" exitCode=0 Jan 27 15:28:13 crc kubenswrapper[4697]: I0127 15:28:13.722342 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m5xbv" event={"ID":"ff0f2382-375f-49bd-84e7-2de103947c5e","Type":"ContainerDied","Data":"db2d41f8c7593cec6a2c4b6ed52c911f342ade7f340e0e5d2dc7aadf73e99594"} Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.103733 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.163354 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.175972 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.248113 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-operator-scripts\") pod \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\" (UID: \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\") " Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.248177 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/586cb948-6d70-4f31-b21b-9088567a2d5c-operator-scripts\") pod \"586cb948-6d70-4f31-b21b-9088567a2d5c\" (UID: \"586cb948-6d70-4f31-b21b-9088567a2d5c\") " Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.248207 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8vtn\" (UniqueName: \"kubernetes.io/projected/586cb948-6d70-4f31-b21b-9088567a2d5c-kube-api-access-g8vtn\") pod \"586cb948-6d70-4f31-b21b-9088567a2d5c\" (UID: \"586cb948-6d70-4f31-b21b-9088567a2d5c\") " Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.248287 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzwxt\" (UniqueName: \"kubernetes.io/projected/17ec00c7-c7ad-4705-bd75-386e42e74100-kube-api-access-hzwxt\") pod \"17ec00c7-c7ad-4705-bd75-386e42e74100\" (UID: \"17ec00c7-c7ad-4705-bd75-386e42e74100\") " Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.248321 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17ec00c7-c7ad-4705-bd75-386e42e74100-operator-scripts\") pod \"17ec00c7-c7ad-4705-bd75-386e42e74100\" (UID: \"17ec00c7-c7ad-4705-bd75-386e42e74100\") " Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.248405 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-kdc4k\" (UniqueName: \"kubernetes.io/projected/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-kube-api-access-kdc4k\") pod \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\" (UID: \"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c\") " Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.249837 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586cb948-6d70-4f31-b21b-9088567a2d5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "586cb948-6d70-4f31-b21b-9088567a2d5c" (UID: "586cb948-6d70-4f31-b21b-9088567a2d5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.250673 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ec00c7-c7ad-4705-bd75-386e42e74100-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17ec00c7-c7ad-4705-bd75-386e42e74100" (UID: "17ec00c7-c7ad-4705-bd75-386e42e74100"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.251598 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26234d1c-51cd-4578-9ec6-cf5a7e8dc55c" (UID: "26234d1c-51cd-4578-9ec6-cf5a7e8dc55c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.255060 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586cb948-6d70-4f31-b21b-9088567a2d5c-kube-api-access-g8vtn" (OuterVolumeSpecName: "kube-api-access-g8vtn") pod "586cb948-6d70-4f31-b21b-9088567a2d5c" (UID: "586cb948-6d70-4f31-b21b-9088567a2d5c"). 
InnerVolumeSpecName "kube-api-access-g8vtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.255203 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ec00c7-c7ad-4705-bd75-386e42e74100-kube-api-access-hzwxt" (OuterVolumeSpecName: "kube-api-access-hzwxt") pod "17ec00c7-c7ad-4705-bd75-386e42e74100" (UID: "17ec00c7-c7ad-4705-bd75-386e42e74100"). InnerVolumeSpecName "kube-api-access-hzwxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.255871 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-kube-api-access-kdc4k" (OuterVolumeSpecName: "kube-api-access-kdc4k") pod "26234d1c-51cd-4578-9ec6-cf5a7e8dc55c" (UID: "26234d1c-51cd-4578-9ec6-cf5a7e8dc55c"). InnerVolumeSpecName "kube-api-access-kdc4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.350243 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzwxt\" (UniqueName: \"kubernetes.io/projected/17ec00c7-c7ad-4705-bd75-386e42e74100-kube-api-access-hzwxt\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.350271 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17ec00c7-c7ad-4705-bd75-386e42e74100-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.350282 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdc4k\" (UniqueName: \"kubernetes.io/projected/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-kube-api-access-kdc4k\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.350290 4697 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.350299 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/586cb948-6d70-4f31-b21b-9088567a2d5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.350308 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8vtn\" (UniqueName: \"kubernetes.io/projected/586cb948-6d70-4f31-b21b-9088567a2d5c-kube-api-access-g8vtn\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.739742 4697 generic.go:334] "Generic (PLEG): container finished" podID="d0f61efb-e85d-4d6a-88d3-64b0a22dd759" containerID="c13661180bdf46ee7e077d53b2ad5b73dcd82175c3391648dd3cc79125a12f6a" exitCode=0 Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.739827 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3ba9-account-create-update-d6wqk" event={"ID":"d0f61efb-e85d-4d6a-88d3-64b0a22dd759","Type":"ContainerDied","Data":"c13661180bdf46ee7e077d53b2ad5b73dcd82175c3391648dd3cc79125a12f6a"} Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.742920 4697 generic.go:334] "Generic (PLEG): container finished" podID="7812f067-3dde-40e3-9a87-7e5d5d7d9597" containerID="1fdfa219d1e84f3159240b033968389ea916f9bb7c05d306752262ba3e6cebb9" exitCode=0 Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.743007 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-02c8-account-create-update-2mlxf" event={"ID":"7812f067-3dde-40e3-9a87-7e5d5d7d9597","Type":"ContainerDied","Data":"1fdfa219d1e84f3159240b033968389ea916f9bb7c05d306752262ba3e6cebb9"} Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.768149 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-8q5r5" event={"ID":"586cb948-6d70-4f31-b21b-9088567a2d5c","Type":"ContainerDied","Data":"eb210ced6710cf20584b50705772a143271423dbd498f0224b12c2016041531c"} Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.768198 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb210ced6710cf20584b50705772a143271423dbd498f0224b12c2016041531c" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.768270 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8q5r5" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.782571 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ph6xf" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.782568 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ph6xf" event={"ID":"26234d1c-51cd-4578-9ec6-cf5a7e8dc55c","Type":"ContainerDied","Data":"640cf55dc72658362579cbb73c321ed3c1157ab0a40fa33a15575b741c6a32ef"} Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.782859 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="640cf55dc72658362579cbb73c321ed3c1157ab0a40fa33a15575b741c6a32ef" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.786119 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g8zkt" event={"ID":"17ec00c7-c7ad-4705-bd75-386e42e74100","Type":"ContainerDied","Data":"d88d9c604274284ed5237832b950a69ea7af56a7b2ba801ee9b8903fe39fcee2"} Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.786126 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g8zkt" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.786136 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d88d9c604274284ed5237832b950a69ea7af56a7b2ba801ee9b8903fe39fcee2" Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.798469 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"31786d07f346b62c45595b8949ab70ec453fad7697c3775228dc5cfac77a5fc3"} Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.798508 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"6f1da676b320835eedb729470da9c7afed329ff638bc003d6300f969c56d6ea4"} Jan 27 15:28:14 crc kubenswrapper[4697]: I0127 15:28:14.798519 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"d513a22b19854ce5aacc03b1ab2db64ff18a5519a600d75b8447b4e494675f7e"} Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.269616 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.337756 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.391592 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0f2382-375f-49bd-84e7-2de103947c5e-operator-scripts\") pod \"ff0f2382-375f-49bd-84e7-2de103947c5e\" (UID: \"ff0f2382-375f-49bd-84e7-2de103947c5e\") " Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.391898 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn8xn\" (UniqueName: \"kubernetes.io/projected/ff0f2382-375f-49bd-84e7-2de103947c5e-kube-api-access-fn8xn\") pod \"ff0f2382-375f-49bd-84e7-2de103947c5e\" (UID: \"ff0f2382-375f-49bd-84e7-2de103947c5e\") " Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.392493 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0f2382-375f-49bd-84e7-2de103947c5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff0f2382-375f-49bd-84e7-2de103947c5e" (UID: "ff0f2382-375f-49bd-84e7-2de103947c5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.396650 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0f2382-375f-49bd-84e7-2de103947c5e-kube-api-access-fn8xn" (OuterVolumeSpecName: "kube-api-access-fn8xn") pod "ff0f2382-375f-49bd-84e7-2de103947c5e" (UID: "ff0f2382-375f-49bd-84e7-2de103947c5e"). InnerVolumeSpecName "kube-api-access-fn8xn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.493315 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f97k6\" (UniqueName: \"kubernetes.io/projected/45de88c8-a3d8-4d43-84af-0f72cabc6057-kube-api-access-f97k6\") pod \"45de88c8-a3d8-4d43-84af-0f72cabc6057\" (UID: \"45de88c8-a3d8-4d43-84af-0f72cabc6057\") " Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.493570 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45de88c8-a3d8-4d43-84af-0f72cabc6057-operator-scripts\") pod \"45de88c8-a3d8-4d43-84af-0f72cabc6057\" (UID: \"45de88c8-a3d8-4d43-84af-0f72cabc6057\") " Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.493975 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn8xn\" (UniqueName: \"kubernetes.io/projected/ff0f2382-375f-49bd-84e7-2de103947c5e-kube-api-access-fn8xn\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.493996 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0f2382-375f-49bd-84e7-2de103947c5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.494830 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45de88c8-a3d8-4d43-84af-0f72cabc6057-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45de88c8-a3d8-4d43-84af-0f72cabc6057" (UID: "45de88c8-a3d8-4d43-84af-0f72cabc6057"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.498022 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45de88c8-a3d8-4d43-84af-0f72cabc6057-kube-api-access-f97k6" (OuterVolumeSpecName: "kube-api-access-f97k6") pod "45de88c8-a3d8-4d43-84af-0f72cabc6057" (UID: "45de88c8-a3d8-4d43-84af-0f72cabc6057"). InnerVolumeSpecName "kube-api-access-f97k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.594917 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45de88c8-a3d8-4d43-84af-0f72cabc6057-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.594945 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f97k6\" (UniqueName: \"kubernetes.io/projected/45de88c8-a3d8-4d43-84af-0f72cabc6057-kube-api-access-f97k6\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.823028 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a724-account-create-update-gldwc" event={"ID":"45de88c8-a3d8-4d43-84af-0f72cabc6057","Type":"ContainerDied","Data":"e73340246cd1e8f0f0c87e18e9bfeb35c0196c830a6512442e6b966c51c43b75"} Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.823078 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a724-account-create-update-gldwc" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.823088 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e73340246cd1e8f0f0c87e18e9bfeb35c0196c830a6512442e6b966c51c43b75" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.825068 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m5xbv" event={"ID":"ff0f2382-375f-49bd-84e7-2de103947c5e","Type":"ContainerDied","Data":"e5c885fb19560b1c2d68f142040fc25fb5b2024ef7c6fea76cb29708839af416"} Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.825106 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5c885fb19560b1c2d68f142040fc25fb5b2024ef7c6fea76cb29708839af416" Jan 27 15:28:15 crc kubenswrapper[4697]: I0127 15:28:15.825223 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m5xbv" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.209753 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.219518 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.318765 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vglwh\" (UniqueName: \"kubernetes.io/projected/7812f067-3dde-40e3-9a87-7e5d5d7d9597-kube-api-access-vglwh\") pod \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\" (UID: \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\") " Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.318925 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-operator-scripts\") pod \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\" (UID: \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\") " Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.319059 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbgwv\" (UniqueName: \"kubernetes.io/projected/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-kube-api-access-vbgwv\") pod \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\" (UID: \"d0f61efb-e85d-4d6a-88d3-64b0a22dd759\") " Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.319087 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7812f067-3dde-40e3-9a87-7e5d5d7d9597-operator-scripts\") pod \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\" (UID: \"7812f067-3dde-40e3-9a87-7e5d5d7d9597\") " Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.319499 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0f61efb-e85d-4d6a-88d3-64b0a22dd759" (UID: "d0f61efb-e85d-4d6a-88d3-64b0a22dd759"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.320089 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7812f067-3dde-40e3-9a87-7e5d5d7d9597-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7812f067-3dde-40e3-9a87-7e5d5d7d9597" (UID: "7812f067-3dde-40e3-9a87-7e5d5d7d9597"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.324083 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7812f067-3dde-40e3-9a87-7e5d5d7d9597-kube-api-access-vglwh" (OuterVolumeSpecName: "kube-api-access-vglwh") pod "7812f067-3dde-40e3-9a87-7e5d5d7d9597" (UID: "7812f067-3dde-40e3-9a87-7e5d5d7d9597"). InnerVolumeSpecName "kube-api-access-vglwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.324147 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-kube-api-access-vbgwv" (OuterVolumeSpecName: "kube-api-access-vbgwv") pod "d0f61efb-e85d-4d6a-88d3-64b0a22dd759" (UID: "d0f61efb-e85d-4d6a-88d3-64b0a22dd759"). InnerVolumeSpecName "kube-api-access-vbgwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.421114 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbgwv\" (UniqueName: \"kubernetes.io/projected/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-kube-api-access-vbgwv\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.421142 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7812f067-3dde-40e3-9a87-7e5d5d7d9597-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.421151 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vglwh\" (UniqueName: \"kubernetes.io/projected/7812f067-3dde-40e3-9a87-7e5d5d7d9597-kube-api-access-vglwh\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.421160 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f61efb-e85d-4d6a-88d3-64b0a22dd759-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.834011 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3ba9-account-create-update-d6wqk" event={"ID":"d0f61efb-e85d-4d6a-88d3-64b0a22dd759","Type":"ContainerDied","Data":"ac55ea04a61e6a713b617da795992a3d383f39d2c18497cf3454d14127deb8b6"} Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.834287 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac55ea04a61e6a713b617da795992a3d383f39d2c18497cf3454d14127deb8b6" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.834373 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3ba9-account-create-update-d6wqk" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.836735 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-02c8-account-create-update-2mlxf" event={"ID":"7812f067-3dde-40e3-9a87-7e5d5d7d9597","Type":"ContainerDied","Data":"494abfc067140c0de4d34d45245fd5de243fb544375f8e0f3123ce90f38415ec"} Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.836770 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="494abfc067140c0de4d34d45245fd5de243fb544375f8e0f3123ce90f38415ec" Jan 27 15:28:21 crc kubenswrapper[4697]: I0127 15:28:16.837069 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-02c8-account-create-update-2mlxf" Jan 27 15:28:22 crc kubenswrapper[4697]: I0127 15:28:22.888487 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4tjc9" event={"ID":"8fe4958e-ea1c-4420-939b-1ff0c52690fa","Type":"ContainerStarted","Data":"d574d3462c9c9ced213e0bc8a80d48e45569e0116e26de633b987ffd2ebb4464"} Jan 27 15:28:22 crc kubenswrapper[4697]: I0127 15:28:22.898286 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"930f93920d564a6368dc2721c74d5b753b8c217421ffc67823ea949fac8e8dac"} Jan 27 15:28:22 crc kubenswrapper[4697]: I0127 15:28:22.898327 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"3c9607b9bc9055524bb44242d94d7e8a631d99f3d4c3481c4a0d8b63b0ffdd53"} Jan 27 15:28:22 crc kubenswrapper[4697]: I0127 15:28:22.898337 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"147da4e11c6fb936171616cd8f159bb73bcad7b6b874f594be8e13611637b65c"} Jan 27 15:28:22 crc kubenswrapper[4697]: I0127 15:28:22.898347 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"0fa330f5a9ad9c2626cc36d67df1766df24e91963863a8381f38efc7985e1322"} Jan 27 15:28:22 crc kubenswrapper[4697]: I0127 15:28:22.911662 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4tjc9" podStartSLOduration=8.820330332 podStartE2EDuration="19.911642554s" podCreationTimestamp="2026-01-27 15:28:03 +0000 UTC" firstStartedPulling="2026-01-27 15:28:10.940270952 +0000 UTC m=+1187.112670743" lastFinishedPulling="2026-01-27 15:28:22.031583184 +0000 UTC m=+1198.203982965" observedRunningTime="2026-01-27 15:28:22.905641689 +0000 UTC m=+1199.078041460" watchObservedRunningTime="2026-01-27 15:28:22.911642554 +0000 UTC m=+1199.084042335" Jan 27 15:28:23 crc kubenswrapper[4697]: I0127 15:28:23.161002 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:28:23 crc kubenswrapper[4697]: I0127 15:28:23.926577 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"183aa19a4f19f52c7626c4b2ca7ca84238a14ec7c653edb705c3243caabb9024"} Jan 27 15:28:23 crc kubenswrapper[4697]: I0127 15:28:23.926677 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"fc7620fff76249139663bdc233ef3c0d976f9942aff23617906e0cd22c9bfc37"} Jan 27 15:28:24 crc kubenswrapper[4697]: I0127 15:28:24.940701 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"c4c66cac-c142-4579-9d13-053d43983229","Type":"ContainerStarted","Data":"ac9bbb7ac4e34e4806a7fd9c1c451599ea978b0a3ce62c1b1cfe958af6f95216"} Jan 27 15:28:24 crc kubenswrapper[4697]: I0127 15:28:24.994281 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.354837992 podStartE2EDuration="46.994261182s" podCreationTimestamp="2026-01-27 15:27:38 +0000 UTC" firstStartedPulling="2026-01-27 15:27:56.394680806 +0000 UTC m=+1172.567080597" lastFinishedPulling="2026-01-27 15:28:22.034104006 +0000 UTC m=+1198.206503787" observedRunningTime="2026-01-27 15:28:24.98718161 +0000 UTC m=+1201.159581401" watchObservedRunningTime="2026-01-27 15:28:24.994261182 +0000 UTC m=+1201.166660963" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.400597 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nqsmh"] Jan 27 15:28:25 crc kubenswrapper[4697]: E0127 15:28:25.401071 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0f2382-375f-49bd-84e7-2de103947c5e" containerName="mariadb-database-create" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401097 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0f2382-375f-49bd-84e7-2de103947c5e" containerName="mariadb-database-create" Jan 27 15:28:25 crc kubenswrapper[4697]: E0127 15:28:25.401125 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45de88c8-a3d8-4d43-84af-0f72cabc6057" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401134 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="45de88c8-a3d8-4d43-84af-0f72cabc6057" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: E0127 15:28:25.401151 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ec00c7-c7ad-4705-bd75-386e42e74100" 
containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401159 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ec00c7-c7ad-4705-bd75-386e42e74100" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: E0127 15:28:25.401173 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7812f067-3dde-40e3-9a87-7e5d5d7d9597" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401180 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="7812f067-3dde-40e3-9a87-7e5d5d7d9597" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: E0127 15:28:25.401197 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26234d1c-51cd-4578-9ec6-cf5a7e8dc55c" containerName="mariadb-database-create" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401205 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="26234d1c-51cd-4578-9ec6-cf5a7e8dc55c" containerName="mariadb-database-create" Jan 27 15:28:25 crc kubenswrapper[4697]: E0127 15:28:25.401220 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586cb948-6d70-4f31-b21b-9088567a2d5c" containerName="mariadb-database-create" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401227 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="586cb948-6d70-4f31-b21b-9088567a2d5c" containerName="mariadb-database-create" Jan 27 15:28:25 crc kubenswrapper[4697]: E0127 15:28:25.401248 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f61efb-e85d-4d6a-88d3-64b0a22dd759" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401256 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f61efb-e85d-4d6a-88d3-64b0a22dd759" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401436 4697 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f61efb-e85d-4d6a-88d3-64b0a22dd759" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401454 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="586cb948-6d70-4f31-b21b-9088567a2d5c" containerName="mariadb-database-create" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401466 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="26234d1c-51cd-4578-9ec6-cf5a7e8dc55c" containerName="mariadb-database-create" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401478 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0f2382-375f-49bd-84e7-2de103947c5e" containerName="mariadb-database-create" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401491 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ec00c7-c7ad-4705-bd75-386e42e74100" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401507 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="7812f067-3dde-40e3-9a87-7e5d5d7d9597" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.401517 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="45de88c8-a3d8-4d43-84af-0f72cabc6057" containerName="mariadb-account-create-update" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.402467 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.405473 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.428097 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nqsmh"] Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.476564 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-svc\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.476616 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrh8\" (UniqueName: \"kubernetes.io/projected/a43e2028-864c-4cb7-b20a-cc9e01417436-kube-api-access-nkrh8\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.476665 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.476701 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " 
pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.476812 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.476844 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-config\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.577626 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.577717 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.577745 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-config\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 
27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.577766 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-svc\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.577805 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrh8\" (UniqueName: \"kubernetes.io/projected/a43e2028-864c-4cb7-b20a-cc9e01417436-kube-api-access-nkrh8\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.577838 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.579488 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-config\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.579709 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-svc\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.580198 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.580228 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.580301 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.597510 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrh8\" (UniqueName: \"kubernetes.io/projected/a43e2028-864c-4cb7-b20a-cc9e01417436-kube-api-access-nkrh8\") pod \"dnsmasq-dns-764c5664d7-nqsmh\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.731763 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.956269 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xgxk5" event={"ID":"f75c5842-64d4-45c9-a282-b8fb8bea1af6","Type":"ContainerStarted","Data":"7c000ddf2638ad23872e6f733e1f5c537c95bf9ee2fd1f801eac52b3a2c28342"} Jan 27 15:28:25 crc kubenswrapper[4697]: I0127 15:28:25.979090 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xgxk5" podStartSLOduration=2.158788732 podStartE2EDuration="34.979056557s" podCreationTimestamp="2026-01-27 15:27:51 +0000 UTC" firstStartedPulling="2026-01-27 15:27:52.346015336 +0000 UTC m=+1168.518415127" lastFinishedPulling="2026-01-27 15:28:25.166283161 +0000 UTC m=+1201.338682952" observedRunningTime="2026-01-27 15:28:25.975195193 +0000 UTC m=+1202.147594974" watchObservedRunningTime="2026-01-27 15:28:25.979056557 +0000 UTC m=+1202.151456338" Jan 27 15:28:26 crc kubenswrapper[4697]: I0127 15:28:26.286381 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nqsmh"] Jan 27 15:28:26 crc kubenswrapper[4697]: I0127 15:28:26.965544 4697 generic.go:334] "Generic (PLEG): container finished" podID="a43e2028-864c-4cb7-b20a-cc9e01417436" containerID="18a0cba2b03049e4f7a5785305c82ef68559cd03fcf3dbc4e6e26c213e3d2553" exitCode=0 Jan 27 15:28:26 crc kubenswrapper[4697]: I0127 15:28:26.965599 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" event={"ID":"a43e2028-864c-4cb7-b20a-cc9e01417436","Type":"ContainerDied","Data":"18a0cba2b03049e4f7a5785305c82ef68559cd03fcf3dbc4e6e26c213e3d2553"} Jan 27 15:28:26 crc kubenswrapper[4697]: I0127 15:28:26.965881 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" 
event={"ID":"a43e2028-864c-4cb7-b20a-cc9e01417436","Type":"ContainerStarted","Data":"fa44932c41321033c2894ca290d29ec7bf7e983da6c9079236db5caf8af5e808"} Jan 27 15:28:27 crc kubenswrapper[4697]: I0127 15:28:27.975518 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" event={"ID":"a43e2028-864c-4cb7-b20a-cc9e01417436","Type":"ContainerStarted","Data":"177c51e20a03bb61eb9238fb000c07b6864bdc36abc15f53c2c51d3b0d92094f"} Jan 27 15:28:27 crc kubenswrapper[4697]: I0127 15:28:27.976162 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:28 crc kubenswrapper[4697]: I0127 15:28:28.004386 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" podStartSLOduration=3.004368932 podStartE2EDuration="3.004368932s" podCreationTimestamp="2026-01-27 15:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:28:27.99981091 +0000 UTC m=+1204.172210721" watchObservedRunningTime="2026-01-27 15:28:28.004368932 +0000 UTC m=+1204.176768703" Jan 27 15:28:28 crc kubenswrapper[4697]: I0127 15:28:28.990365 4697 generic.go:334] "Generic (PLEG): container finished" podID="8fe4958e-ea1c-4420-939b-1ff0c52690fa" containerID="d574d3462c9c9ced213e0bc8a80d48e45569e0116e26de633b987ffd2ebb4464" exitCode=0 Jan 27 15:28:28 crc kubenswrapper[4697]: I0127 15:28:28.990444 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4tjc9" event={"ID":"8fe4958e-ea1c-4420-939b-1ff0c52690fa","Type":"ContainerDied","Data":"d574d3462c9c9ced213e0bc8a80d48e45569e0116e26de633b987ffd2ebb4464"} Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.270439 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.434741 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-config-data\") pod \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.434808 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq2vj\" (UniqueName: \"kubernetes.io/projected/8fe4958e-ea1c-4420-939b-1ff0c52690fa-kube-api-access-nq2vj\") pod \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.434876 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-combined-ca-bundle\") pod \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\" (UID: \"8fe4958e-ea1c-4420-939b-1ff0c52690fa\") " Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.450208 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe4958e-ea1c-4420-939b-1ff0c52690fa-kube-api-access-nq2vj" (OuterVolumeSpecName: "kube-api-access-nq2vj") pod "8fe4958e-ea1c-4420-939b-1ff0c52690fa" (UID: "8fe4958e-ea1c-4420-939b-1ff0c52690fa"). InnerVolumeSpecName "kube-api-access-nq2vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.459171 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fe4958e-ea1c-4420-939b-1ff0c52690fa" (UID: "8fe4958e-ea1c-4420-939b-1ff0c52690fa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.477584 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-config-data" (OuterVolumeSpecName: "config-data") pod "8fe4958e-ea1c-4420-939b-1ff0c52690fa" (UID: "8fe4958e-ea1c-4420-939b-1ff0c52690fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.537599 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.537666 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe4958e-ea1c-4420-939b-1ff0c52690fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:30 crc kubenswrapper[4697]: I0127 15:28:30.537685 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq2vj\" (UniqueName: \"kubernetes.io/projected/8fe4958e-ea1c-4420-939b-1ff0c52690fa-kube-api-access-nq2vj\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.008636 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4tjc9" event={"ID":"8fe4958e-ea1c-4420-939b-1ff0c52690fa","Type":"ContainerDied","Data":"a8d7272217ee519b71d71ab84a895182f689d56ad9c73d97e2f5f800e8199846"} Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.008676 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8d7272217ee519b71d71ab84a895182f689d56ad9c73d97e2f5f800e8199846" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.008728 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4tjc9" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.291168 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mwqgr"] Jan 27 15:28:31 crc kubenswrapper[4697]: E0127 15:28:31.292464 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe4958e-ea1c-4420-939b-1ff0c52690fa" containerName="keystone-db-sync" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.292552 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe4958e-ea1c-4420-939b-1ff0c52690fa" containerName="keystone-db-sync" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.292770 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe4958e-ea1c-4420-939b-1ff0c52690fa" containerName="keystone-db-sync" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.296642 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.311192 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.311920 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.312521 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.313312 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hr2gd" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.314888 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.335466 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mwqgr"] Jan 27 15:28:31 crc 
kubenswrapper[4697]: I0127 15:28:31.345489 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nqsmh"] Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.345896 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" podUID="a43e2028-864c-4cb7-b20a-cc9e01417436" containerName="dnsmasq-dns" containerID="cri-o://177c51e20a03bb61eb9238fb000c07b6864bdc36abc15f53c2c51d3b0d92094f" gracePeriod=10 Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.427344 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-5xj92"] Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.432310 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.453703 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-combined-ca-bundle\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.454089 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zt5n\" (UniqueName: \"kubernetes.io/projected/09bebc2c-092b-415e-b4b3-80296620ce1b-kube-api-access-4zt5n\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.454217 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-config-data\") pod \"keystone-bootstrap-mwqgr\" (UID: 
\"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.454364 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-fernet-keys\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.454521 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-credential-keys\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.454657 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-scripts\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.461835 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-5xj92"] Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.556048 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-fernet-keys\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.556803 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdkz\" (UniqueName: 
\"kubernetes.io/projected/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-kube-api-access-gmdkz\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.556858 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-credential-keys\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.556921 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-scripts\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.556953 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-svc\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.556976 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-combined-ca-bundle\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.556997 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zt5n\" (UniqueName: 
\"kubernetes.io/projected/09bebc2c-092b-415e-b4b3-80296620ce1b-kube-api-access-4zt5n\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.557017 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.557042 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.557068 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-config\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.558712 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-config-data\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.558754 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.567664 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-config-data\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.567972 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-fernet-keys\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.569202 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-credential-keys\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.578192 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-scripts\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.578667 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-combined-ca-bundle\") pod \"keystone-bootstrap-mwqgr\" (UID: 
\"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.593059 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zt5n\" (UniqueName: \"kubernetes.io/projected/09bebc2c-092b-415e-b4b3-80296620ce1b-kube-api-access-4zt5n\") pod \"keystone-bootstrap-mwqgr\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.613200 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.664076 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.664126 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.664166 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-config\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.664193 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.664234 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmdkz\" (UniqueName: \"kubernetes.io/projected/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-kube-api-access-gmdkz\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.664315 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-svc\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.665156 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-svc\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.665647 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.666170 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-nb\") 
pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.666636 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-config\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.667149 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.706413 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-789b55bb8f-pdnjn"] Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.707604 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.715462 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmdkz\" (UniqueName: \"kubernetes.io/projected/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-kube-api-access-gmdkz\") pod \"dnsmasq-dns-5959f8865f-5xj92\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.716082 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.716420 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-lf4sq" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.716614 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.716796 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.760149 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.773022 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-789b55bb8f-pdnjn"] Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.870468 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-config-data\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.870922 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6kg\" (UniqueName: \"kubernetes.io/projected/f716119d-b8f8-4bdf-87de-e4452080d972-kube-api-access-dw6kg\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.871066 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f716119d-b8f8-4bdf-87de-e4452080d972-horizon-secret-key\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.871157 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f716119d-b8f8-4bdf-87de-e4452080d972-logs\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.871238 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-scripts\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.889839 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-n5g7m"] Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.891120 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.893928 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-h7rhn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.894116 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.894504 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.906841 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-5c6j2"] Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.907862 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.922237 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n5g7m"] Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.938394 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4sphp" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.938585 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.941027 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-5xj92"] Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.972676 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-config-data\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.972943 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw6kg\" (UniqueName: \"kubernetes.io/projected/f716119d-b8f8-4bdf-87de-e4452080d972-kube-api-access-dw6kg\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.973081 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f716119d-b8f8-4bdf-87de-e4452080d972-horizon-secret-key\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.973160 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f716119d-b8f8-4bdf-87de-e4452080d972-logs\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.973230 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-scripts\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.973866 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-scripts\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.974017 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-config-data\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.974426 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f716119d-b8f8-4bdf-87de-e4452080d972-logs\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.978990 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f716119d-b8f8-4bdf-87de-e4452080d972-horizon-secret-key\") pod \"horizon-789b55bb8f-pdnjn\" 
(UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:31 crc kubenswrapper[4697]: I0127 15:28:31.984221 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5c6j2"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.032960 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-pbx5g"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.044096 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.049608 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.050388 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dbs24" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.051537 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.059542 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw6kg\" (UniqueName: \"kubernetes.io/projected/f716119d-b8f8-4bdf-87de-e4452080d972-kube-api-access-dw6kg\") pod \"horizon-789b55bb8f-pdnjn\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.082080 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvx9\" (UniqueName: \"kubernetes.io/projected/09a835cc-5807-48ce-a9f8-354d3182603f-kube-api-access-hgvx9\") pod \"barbican-db-sync-5c6j2\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.097379 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-etc-machine-id\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.097604 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-combined-ca-bundle\") pod \"barbican-db-sync-5c6j2\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.097723 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-scripts\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.097877 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-db-sync-config-data\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.097984 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-combined-ca-bundle\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.098133 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-db-sync-config-data\") pod \"barbican-db-sync-5c6j2\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.098258 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjlc\" (UniqueName: \"kubernetes.io/projected/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-kube-api-access-6mjlc\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.098395 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-config-data\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.121999 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.135341 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.151866 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-lnmrx"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.166903 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.167344 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.169638 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.170032 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.200077 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-lnmrx"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.206706 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.206769 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-config\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.206802 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.206840 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-run-httpd\") pod 
\"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.206897 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgvx9\" (UniqueName: \"kubernetes.io/projected/09a835cc-5807-48ce-a9f8-354d3182603f-kube-api-access-hgvx9\") pod \"barbican-db-sync-5c6j2\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.206913 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-config-data\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.206978 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-etc-machine-id\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.207049 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.207068 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-combined-ca-bundle\") pod \"barbican-db-sync-5c6j2\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " 
pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.207083 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkzbk\" (UniqueName: \"kubernetes.io/projected/8fac9142-cfe0-4849-b6d4-3315ce2475ef-kube-api-access-zkzbk\") pod \"neutron-db-sync-pbx5g\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.209798 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.209976 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-etc-machine-id\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.210509 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-config\") pod \"neutron-db-sync-pbx5g\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.210543 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-scripts\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.210846 
4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-db-sync-config-data\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.210867 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-combined-ca-bundle\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216087 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-log-httpd\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216491 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-combined-ca-bundle\") pod \"neutron-db-sync-pbx5g\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216538 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-db-sync-config-data\") pod \"barbican-db-sync-5c6j2\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216556 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216586 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjlc\" (UniqueName: \"kubernetes.io/projected/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-kube-api-access-6mjlc\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216614 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-scripts\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216637 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j62jp\" (UniqueName: \"kubernetes.io/projected/70da0843-011d-422d-bc59-479d90e689a8-kube-api-access-j62jp\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216682 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghclv\" (UniqueName: \"kubernetes.io/projected/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-kube-api-access-ghclv\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216697 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.216717 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-config-data\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.221835 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-combined-ca-bundle\") pod \"barbican-db-sync-5c6j2\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.221902 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.227291 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-db-sync-config-data\") pod \"barbican-db-sync-5c6j2\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.228600 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-combined-ca-bundle\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.229007 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-db-sync-config-data\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.231140 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-scripts\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.236321 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-config-data\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.250845 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pbx5g"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.262250 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mjlc\" (UniqueName: \"kubernetes.io/projected/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-kube-api-access-6mjlc\") pod \"cinder-db-sync-n5g7m\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.264321 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgvx9\" (UniqueName: \"kubernetes.io/projected/09a835cc-5807-48ce-a9f8-354d3182603f-kube-api-access-hgvx9\") pod \"barbican-db-sync-5c6j2\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.264980 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xc9hp"] Jan 27 15:28:32 crc 
kubenswrapper[4697]: I0127 15:28:32.265993 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.284203 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-999wh" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.284389 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.284518 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.307974 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56b6fb4dd9-vzq9d"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.309404 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.317945 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.317982 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkzbk\" (UniqueName: \"kubernetes.io/projected/8fac9142-cfe0-4849-b6d4-3315ce2475ef-kube-api-access-zkzbk\") pod \"neutron-db-sync-pbx5g\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.317999 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318018 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-config\") pod \"neutron-db-sync-pbx5g\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318040 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-scripts\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318062 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11e83f3-61e4-4f13-89e2-cf9209760247-logs\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318080 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwtc9\" (UniqueName: \"kubernetes.io/projected/c11e83f3-61e4-4f13-89e2-cf9209760247-kube-api-access-pwtc9\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318106 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-log-httpd\") pod 
\"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318122 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-combined-ca-bundle\") pod \"neutron-db-sync-pbx5g\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318145 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318173 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-scripts\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318192 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j62jp\" (UniqueName: \"kubernetes.io/projected/70da0843-011d-422d-bc59-479d90e689a8-kube-api-access-j62jp\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318213 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghclv\" (UniqueName: \"kubernetes.io/projected/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-kube-api-access-ghclv\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc 
kubenswrapper[4697]: I0127 15:28:32.318231 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-combined-ca-bundle\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318250 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318278 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318301 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-config\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318317 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318340 4697 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-run-httpd\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318367 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-config-data\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.318409 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-config-data\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.319172 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.320001 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.320746 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-log-httpd\") pod \"ceilometer-0\" (UID: 
\"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.321548 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.322702 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.323285 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-run-httpd\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.323818 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-config\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.331703 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-scripts\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.332548 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-combined-ca-bundle\") pod \"neutron-db-sync-pbx5g\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.332562 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.333367 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.335907 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-config-data\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.344820 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-config\") pod \"neutron-db-sync-pbx5g\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.346744 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xc9hp"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.363631 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j62jp\" (UniqueName: 
\"kubernetes.io/projected/70da0843-011d-422d-bc59-479d90e689a8-kube-api-access-j62jp\") pod \"ceilometer-0\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.370454 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkzbk\" (UniqueName: \"kubernetes.io/projected/8fac9142-cfe0-4849-b6d4-3315ce2475ef-kube-api-access-zkzbk\") pod \"neutron-db-sync-pbx5g\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.371032 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghclv\" (UniqueName: \"kubernetes.io/projected/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-kube-api-access-ghclv\") pod \"dnsmasq-dns-58dd9ff6bc-lnmrx\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.387509 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b6fb4dd9-vzq9d"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.413323 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.422699 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-scripts\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.422754 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-scripts\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.422791 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11e83f3-61e4-4f13-89e2-cf9209760247-logs\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.422812 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwtc9\" (UniqueName: \"kubernetes.io/projected/c11e83f3-61e4-4f13-89e2-cf9209760247-kube-api-access-pwtc9\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.422844 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9mgn\" (UniqueName: \"kubernetes.io/projected/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-kube-api-access-p9mgn\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc 
kubenswrapper[4697]: I0127 15:28:32.422871 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-horizon-secret-key\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.422897 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-logs\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.422920 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-combined-ca-bundle\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.422968 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-config-data\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.423021 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-config-data\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.423968 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11e83f3-61e4-4f13-89e2-cf9209760247-logs\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.429304 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-combined-ca-bundle\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.434634 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-scripts\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.439535 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-config-data\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.460408 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwtc9\" (UniqueName: \"kubernetes.io/projected/c11e83f3-61e4-4f13-89e2-cf9209760247-kube-api-access-pwtc9\") pod \"placement-db-sync-xc9hp\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.506240 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.524850 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9mgn\" (UniqueName: \"kubernetes.io/projected/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-kube-api-access-p9mgn\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.524905 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-horizon-secret-key\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.524932 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-logs\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.524979 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-config-data\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.525059 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-scripts\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: 
I0127 15:28:32.525558 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-logs\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.525660 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-scripts\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.530958 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.532126 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-config-data\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.532693 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.543308 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9mgn\" (UniqueName: \"kubernetes.io/projected/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-kube-api-access-p9mgn\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.544392 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-horizon-secret-key\") pod \"horizon-56b6fb4dd9-vzq9d\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.560185 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.587751 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xc9hp" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.665893 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.680625 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mwqgr"] Jan 27 15:28:32 crc kubenswrapper[4697]: I0127 15:28:32.926317 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-5xj92"] Jan 27 15:28:33 crc kubenswrapper[4697]: I0127 15:28:33.102383 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-789b55bb8f-pdnjn"] Jan 27 15:28:33 crc kubenswrapper[4697]: I0127 15:28:33.110209 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-5xj92" event={"ID":"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40","Type":"ContainerStarted","Data":"7a4970c902a3997eec8c81c6d6928ca838627e9b4a0624bce54a7cbef11a7063"} Jan 27 15:28:33 crc kubenswrapper[4697]: I0127 15:28:33.135717 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mwqgr" event={"ID":"09bebc2c-092b-415e-b4b3-80296620ce1b","Type":"ContainerStarted","Data":"07dcf0441e80f347c4d7cad8654be241eb9ed2b44c442c6623c297d9b407dc74"} Jan 27 15:28:33 crc kubenswrapper[4697]: I0127 15:28:33.345690 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-lnmrx"] Jan 27 15:28:33 crc kubenswrapper[4697]: I0127 15:28:33.379559 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:28:33 crc kubenswrapper[4697]: I0127 15:28:33.411340 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pbx5g"] Jan 27 15:28:33 crc kubenswrapper[4697]: I0127 15:28:33.538603 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5c6j2"] Jan 27 15:28:33 crc kubenswrapper[4697]: I0127 15:28:33.822883 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b6fb4dd9-vzq9d"] Jan 27 15:28:33 crc 
kubenswrapper[4697]: I0127 15:28:33.878079 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n5g7m"] Jan 27 15:28:33 crc kubenswrapper[4697]: W0127 15:28:33.891244 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba2a2abf_806a_4708_8f03_9e68c85c6c6c.slice/crio-2aa0ff461f3adbdfdc82aad5f7b5540544320bb4a0072566931df2dd64d8ae47 WatchSource:0}: Error finding container 2aa0ff461f3adbdfdc82aad5f7b5540544320bb4a0072566931df2dd64d8ae47: Status 404 returned error can't find the container with id 2aa0ff461f3adbdfdc82aad5f7b5540544320bb4a0072566931df2dd64d8ae47 Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.062630 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xc9hp"] Jan 27 15:28:34 crc kubenswrapper[4697]: W0127 15:28:34.064228 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc11e83f3_61e4_4f13_89e2_cf9209760247.slice/crio-8eda167125e7fa07dcab714d9553f94345c90fbfb19beb504cf2a2e4ea07bee6 WatchSource:0}: Error finding container 8eda167125e7fa07dcab714d9553f94345c90fbfb19beb504cf2a2e4ea07bee6: Status 404 returned error can't find the container with id 8eda167125e7fa07dcab714d9553f94345c90fbfb19beb504cf2a2e4ea07bee6 Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.144322 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b6fb4dd9-vzq9d" event={"ID":"6ceeb921-c7c4-41df-bed1-a082bdcb6e79","Type":"ContainerStarted","Data":"8e3c916b4bbe2d56f93b3375840b4c9e8a5f2f9784327ff50fe12c09dfcf47ef"} Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.146056 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" 
event={"ID":"98cd1413-1ae7-49dd-91b9-d30f7947c4ea","Type":"ContainerStarted","Data":"371c6ec92a973351219930774e641200efaa1ca582388d332a524b6148494c90"} Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.147434 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789b55bb8f-pdnjn" event={"ID":"f716119d-b8f8-4bdf-87de-e4452080d972","Type":"ContainerStarted","Data":"581d9414a5dbcec64939a3fcbc60962bb3dd8729b3a2d996cc09ee86bbf8cf7f"} Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.148453 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xc9hp" event={"ID":"c11e83f3-61e4-4f13-89e2-cf9209760247","Type":"ContainerStarted","Data":"8eda167125e7fa07dcab714d9553f94345c90fbfb19beb504cf2a2e4ea07bee6"} Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.149630 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n5g7m" event={"ID":"ba2a2abf-806a-4708-8f03-9e68c85c6c6c","Type":"ContainerStarted","Data":"2aa0ff461f3adbdfdc82aad5f7b5540544320bb4a0072566931df2dd64d8ae47"} Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.150745 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerStarted","Data":"91106f703021d9c55ce074eb58a4fcca0a67b20e934e2cea01f117e7070b17b6"} Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.151771 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5c6j2" event={"ID":"09a835cc-5807-48ce-a9f8-354d3182603f","Type":"ContainerStarted","Data":"5105b6a0f7dadb3760c94a88c4372981669e160e61a7e82b3bb341cb466c55c2"} Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.152716 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pbx5g" event={"ID":"8fac9142-cfe0-4849-b6d4-3315ce2475ef","Type":"ContainerStarted","Data":"46ad3c48299713728442b73f3951287fc90938046d7ba8ac0aa2009957e9a040"} Jan 
27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.452815 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-789b55bb8f-pdnjn"] Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.511949 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.528354 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58cf9979b5-9tn4x"] Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.536485 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.556671 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58cf9979b5-9tn4x"] Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.591090 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-logs\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.591131 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-config-data\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.591182 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-horizon-secret-key\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc 
kubenswrapper[4697]: I0127 15:28:34.591262 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-scripts\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.591309 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfdq\" (UniqueName: \"kubernetes.io/projected/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-kube-api-access-trfdq\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.692767 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trfdq\" (UniqueName: \"kubernetes.io/projected/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-kube-api-access-trfdq\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.692835 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-logs\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.692863 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-config-data\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.692901 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-horizon-secret-key\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.692973 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-scripts\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.693536 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-logs\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.693901 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-scripts\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.694447 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-config-data\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.709988 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trfdq\" (UniqueName: 
\"kubernetes.io/projected/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-kube-api-access-trfdq\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.716544 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-horizon-secret-key\") pod \"horizon-58cf9979b5-9tn4x\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:34 crc kubenswrapper[4697]: I0127 15:28:34.856831 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:28:35 crc kubenswrapper[4697]: I0127 15:28:35.352858 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58cf9979b5-9tn4x"] Jan 27 15:28:35 crc kubenswrapper[4697]: W0127 15:28:35.362338 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8b36f3c_50c2_400c_bd10_0dfe3ac4a01d.slice/crio-7991fb1895c3da4f700b312332c89f6ff9adce01b3e2cc08b31fd74ea3ace8cf WatchSource:0}: Error finding container 7991fb1895c3da4f700b312332c89f6ff9adce01b3e2cc08b31fd74ea3ace8cf: Status 404 returned error can't find the container with id 7991fb1895c3da4f700b312332c89f6ff9adce01b3e2cc08b31fd74ea3ace8cf Jan 27 15:28:35 crc kubenswrapper[4697]: I0127 15:28:35.734248 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" podUID="a43e2028-864c-4cb7-b20a-cc9e01417436" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.301404 4697 generic.go:334] "Generic (PLEG): container finished" podID="a43e2028-864c-4cb7-b20a-cc9e01417436" 
containerID="177c51e20a03bb61eb9238fb000c07b6864bdc36abc15f53c2c51d3b0d92094f" exitCode=0 Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.301842 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" event={"ID":"a43e2028-864c-4cb7-b20a-cc9e01417436","Type":"ContainerDied","Data":"177c51e20a03bb61eb9238fb000c07b6864bdc36abc15f53c2c51d3b0d92094f"} Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.307071 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58cf9979b5-9tn4x" event={"ID":"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d","Type":"ContainerStarted","Data":"7991fb1895c3da4f700b312332c89f6ff9adce01b3e2cc08b31fd74ea3ace8cf"} Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.634335 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.665609 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-config\") pod \"a43e2028-864c-4cb7-b20a-cc9e01417436\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.665751 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-nb\") pod \"a43e2028-864c-4cb7-b20a-cc9e01417436\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.665815 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-sb\") pod \"a43e2028-864c-4cb7-b20a-cc9e01417436\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 
15:28:37.665895 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-svc\") pod \"a43e2028-864c-4cb7-b20a-cc9e01417436\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.665941 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-swift-storage-0\") pod \"a43e2028-864c-4cb7-b20a-cc9e01417436\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.666025 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrh8\" (UniqueName: \"kubernetes.io/projected/a43e2028-864c-4cb7-b20a-cc9e01417436-kube-api-access-nkrh8\") pod \"a43e2028-864c-4cb7-b20a-cc9e01417436\" (UID: \"a43e2028-864c-4cb7-b20a-cc9e01417436\") " Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.760075 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43e2028-864c-4cb7-b20a-cc9e01417436-kube-api-access-nkrh8" (OuterVolumeSpecName: "kube-api-access-nkrh8") pod "a43e2028-864c-4cb7-b20a-cc9e01417436" (UID: "a43e2028-864c-4cb7-b20a-cc9e01417436"). InnerVolumeSpecName "kube-api-access-nkrh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.769480 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrh8\" (UniqueName: \"kubernetes.io/projected/a43e2028-864c-4cb7-b20a-cc9e01417436-kube-api-access-nkrh8\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.796565 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a43e2028-864c-4cb7-b20a-cc9e01417436" (UID: "a43e2028-864c-4cb7-b20a-cc9e01417436"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.831764 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a43e2028-864c-4cb7-b20a-cc9e01417436" (UID: "a43e2028-864c-4cb7-b20a-cc9e01417436"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.832537 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a43e2028-864c-4cb7-b20a-cc9e01417436" (UID: "a43e2028-864c-4cb7-b20a-cc9e01417436"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.843691 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-config" (OuterVolumeSpecName: "config") pod "a43e2028-864c-4cb7-b20a-cc9e01417436" (UID: "a43e2028-864c-4cb7-b20a-cc9e01417436"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.850408 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a43e2028-864c-4cb7-b20a-cc9e01417436" (UID: "a43e2028-864c-4cb7-b20a-cc9e01417436"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.871778 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.871839 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.871852 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.871866 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:37 crc kubenswrapper[4697]: I0127 15:28:37.871878 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a43e2028-864c-4cb7-b20a-cc9e01417436-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.354677 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" event={"ID":"a43e2028-864c-4cb7-b20a-cc9e01417436","Type":"ContainerDied","Data":"fa44932c41321033c2894ca290d29ec7bf7e983da6c9079236db5caf8af5e808"} Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.355027 4697 scope.go:117] "RemoveContainer" containerID="177c51e20a03bb61eb9238fb000c07b6864bdc36abc15f53c2c51d3b0d92094f" Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.355172 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nqsmh" Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.375113 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pbx5g" event={"ID":"8fac9142-cfe0-4849-b6d4-3315ce2475ef","Type":"ContainerStarted","Data":"b415f2de3fd0a8e4b4e4315c410125fe1f25008c46f379f9a295599c46f44730"} Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.394094 4697 generic.go:334] "Generic (PLEG): container finished" podID="98cd1413-1ae7-49dd-91b9-d30f7947c4ea" containerID="4e200fb4cff5eba7c81253d76dd7f2d682567abe1ed975e856f6bac846a345ca" exitCode=0 Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.394313 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" event={"ID":"98cd1413-1ae7-49dd-91b9-d30f7947c4ea","Type":"ContainerDied","Data":"4e200fb4cff5eba7c81253d76dd7f2d682567abe1ed975e856f6bac846a345ca"} Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.418069 4697 generic.go:334] "Generic (PLEG): container finished" podID="a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" containerID="4faed5d7dc7500bdfedae676c651bf46b1fd078dff802e3e1770d053496ae81e" exitCode=0 Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.418174 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-5xj92" 
event={"ID":"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40","Type":"ContainerDied","Data":"4faed5d7dc7500bdfedae676c651bf46b1fd078dff802e3e1770d053496ae81e"} Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.420816 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-pbx5g" podStartSLOduration=7.420795476 podStartE2EDuration="7.420795476s" podCreationTimestamp="2026-01-27 15:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:28:38.407866932 +0000 UTC m=+1214.580266713" watchObservedRunningTime="2026-01-27 15:28:38.420795476 +0000 UTC m=+1214.593195257" Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.430061 4697 scope.go:117] "RemoveContainer" containerID="18a0cba2b03049e4f7a5785305c82ef68559cd03fcf3dbc4e6e26c213e3d2553" Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.437395 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mwqgr" event={"ID":"09bebc2c-092b-415e-b4b3-80296620ce1b","Type":"ContainerStarted","Data":"bd61487fd854802c2e320e4d05469eb46e19091ab34cfe84eec437e4c137a414"} Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.497986 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nqsmh"] Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.540851 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nqsmh"] Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.561303 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mwqgr" podStartSLOduration=7.561284929 podStartE2EDuration="7.561284929s" podCreationTimestamp="2026-01-27 15:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:28:38.554366002 +0000 UTC 
m=+1214.726765783" watchObservedRunningTime="2026-01-27 15:28:38.561284929 +0000 UTC m=+1214.733684700" Jan 27 15:28:38 crc kubenswrapper[4697]: I0127 15:28:38.610058 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43e2028-864c-4cb7-b20a-cc9e01417436" path="/var/lib/kubelet/pods/a43e2028-864c-4cb7-b20a-cc9e01417436/volumes" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.004474 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.107749 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-swift-storage-0\") pod \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.107906 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-nb\") pod \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.107946 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-svc\") pod \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.107993 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-config\") pod \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 
15:28:39.108024 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-sb\") pod \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.108076 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmdkz\" (UniqueName: \"kubernetes.io/projected/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-kube-api-access-gmdkz\") pod \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\" (UID: \"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40\") " Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.121279 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-kube-api-access-gmdkz" (OuterVolumeSpecName: "kube-api-access-gmdkz") pod "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" (UID: "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40"). InnerVolumeSpecName "kube-api-access-gmdkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.142501 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" (UID: "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.145392 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" (UID: "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.174886 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" (UID: "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.176221 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" (UID: "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.178996 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-config" (OuterVolumeSpecName: "config") pod "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" (UID: "a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.210221 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.210252 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.210265 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.210273 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.210282 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.210290 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmdkz\" (UniqueName: \"kubernetes.io/projected/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40-kube-api-access-gmdkz\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.477608 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" event={"ID":"98cd1413-1ae7-49dd-91b9-d30f7947c4ea","Type":"ContainerStarted","Data":"3dfcc3b51443f5b0547b3b33bbcd5d7c3d4c777901c6ff9a71b37261257fd236"} Jan 27 15:28:39 crc 
kubenswrapper[4697]: I0127 15:28:39.478080 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.488384 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-5xj92" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.488374 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-5xj92" event={"ID":"a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40","Type":"ContainerDied","Data":"7a4970c902a3997eec8c81c6d6928ca838627e9b4a0624bce54a7cbef11a7063"} Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.488520 4697 scope.go:117] "RemoveContainer" containerID="4faed5d7dc7500bdfedae676c651bf46b1fd078dff802e3e1770d053496ae81e" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.515437 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" podStartSLOduration=7.5154149199999996 podStartE2EDuration="7.51541492s" podCreationTimestamp="2026-01-27 15:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:28:39.509699041 +0000 UTC m=+1215.682098822" watchObservedRunningTime="2026-01-27 15:28:39.51541492 +0000 UTC m=+1215.687814711" Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.582009 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-5xj92"] Jan 27 15:28:39 crc kubenswrapper[4697]: I0127 15:28:39.594822 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-5xj92"] Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.238212 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56b6fb4dd9-vzq9d"] Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.289694 4697 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/horizon-5965fc65fb-dvhzz"] Jan 27 15:28:40 crc kubenswrapper[4697]: E0127 15:28:40.290134 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" containerName="init" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.290155 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" containerName="init" Jan 27 15:28:40 crc kubenswrapper[4697]: E0127 15:28:40.290188 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43e2028-864c-4cb7-b20a-cc9e01417436" containerName="dnsmasq-dns" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.290196 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43e2028-864c-4cb7-b20a-cc9e01417436" containerName="dnsmasq-dns" Jan 27 15:28:40 crc kubenswrapper[4697]: E0127 15:28:40.291902 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43e2028-864c-4cb7-b20a-cc9e01417436" containerName="init" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.291915 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43e2028-864c-4cb7-b20a-cc9e01417436" containerName="init" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.292221 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43e2028-864c-4cb7-b20a-cc9e01417436" containerName="dnsmasq-dns" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.292258 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" containerName="init" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.293466 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.299265 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.310828 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5965fc65fb-dvhzz"] Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.337432 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-tls-certs\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.337492 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6v5\" (UniqueName: \"kubernetes.io/projected/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-kube-api-access-mt6v5\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.337527 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-scripts\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.337550 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-secret-key\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 
27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.337565 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-config-data\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.337587 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-combined-ca-bundle\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.337605 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-logs\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.415622 4697 scope.go:117] "RemoveContainer" containerID="867e4d5b8aa913644d6dbe952dd54b059411c64e96b8a20b8cd471ed8de3bd45" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.438981 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-scripts\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.439048 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-config-data\") pod 
\"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.439083 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-secret-key\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.439120 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-combined-ca-bundle\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.439136 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-logs\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.439239 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-tls-certs\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.439270 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt6v5\" (UniqueName: \"kubernetes.io/projected/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-kube-api-access-mt6v5\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " 
pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.441387 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-scripts\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.442409 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-logs\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.443607 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58cf9979b5-9tn4x"] Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.444837 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-config-data\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.449895 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-secret-key\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.450490 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-combined-ca-bundle\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " 
pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.451019 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-tls-certs\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.489447 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt6v5\" (UniqueName: \"kubernetes.io/projected/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-kube-api-access-mt6v5\") pod \"horizon-5965fc65fb-dvhzz\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.489603 4697 scope.go:117] "RemoveContainer" containerID="7d5a2579e4e43d8a9dd99c02e30ba9e511f1341190f40fd3db1259ccb6b5c208" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.533059 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b9dc56b78-cpxnx"] Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.534553 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.543093 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-horizon-secret-key\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.543133 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-scripts\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.543176 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-config-data\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.543201 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-horizon-tls-certs\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.543236 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-combined-ca-bundle\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: 
\"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.543268 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp9l7\" (UniqueName: \"kubernetes.io/projected/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-kube-api-access-dp9l7\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.543320 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-logs\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.585673 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40" path="/var/lib/kubelet/pods/a7bd5fd5-3c0d-4ff4-ae1e-933d95c02e40/volumes" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.587436 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9dc56b78-cpxnx"] Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.618543 4697 scope.go:117] "RemoveContainer" containerID="065973e512ac85d9a8e4d23acc1d90e12e867f0a2b095a5c5ed17fb201b6bd3f" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.627616 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.645661 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-config-data\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.649424 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-config-data\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.655342 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-horizon-tls-certs\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.655498 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-combined-ca-bundle\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.655592 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp9l7\" (UniqueName: \"kubernetes.io/projected/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-kube-api-access-dp9l7\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc 
kubenswrapper[4697]: I0127 15:28:40.655621 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-logs\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.655762 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-horizon-secret-key\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.655808 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-scripts\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.656348 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-scripts\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.659517 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-logs\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.678942 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-combined-ca-bundle\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.680751 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-horizon-tls-certs\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.687359 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-horizon-secret-key\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.692559 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp9l7\" (UniqueName: \"kubernetes.io/projected/ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4-kube-api-access-dp9l7\") pod \"horizon-5b9dc56b78-cpxnx\" (UID: \"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4\") " pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:40 crc kubenswrapper[4697]: I0127 15:28:40.917437 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:28:41 crc kubenswrapper[4697]: I0127 15:28:41.630941 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5965fc65fb-dvhzz"] Jan 27 15:28:42 crc kubenswrapper[4697]: I0127 15:28:42.026742 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9dc56b78-cpxnx"] Jan 27 15:28:42 crc kubenswrapper[4697]: W0127 15:28:42.069035 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca5e937a_90cf_44e0_bf5c_bcb75c95a2f4.slice/crio-3ab87844251b73c1b172ad07f6b662ccd751c35d783d3314f333c7309ef03349 WatchSource:0}: Error finding container 3ab87844251b73c1b172ad07f6b662ccd751c35d783d3314f333c7309ef03349: Status 404 returned error can't find the container with id 3ab87844251b73c1b172ad07f6b662ccd751c35d783d3314f333c7309ef03349 Jan 27 15:28:42 crc kubenswrapper[4697]: I0127 15:28:42.628318 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerStarted","Data":"df9f412f4a46fd18e30b83b99e2150f3a7fcbe89d808c1d409cff5a479e9d5e1"} Jan 27 15:28:42 crc kubenswrapper[4697]: I0127 15:28:42.630413 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9dc56b78-cpxnx" event={"ID":"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4","Type":"ContainerStarted","Data":"3ab87844251b73c1b172ad07f6b662ccd751c35d783d3314f333c7309ef03349"} Jan 27 15:28:46 crc kubenswrapper[4697]: I0127 15:28:46.669566 4697 generic.go:334] "Generic (PLEG): container finished" podID="09bebc2c-092b-415e-b4b3-80296620ce1b" containerID="bd61487fd854802c2e320e4d05469eb46e19091ab34cfe84eec437e4c137a414" exitCode=0 Jan 27 15:28:46 crc kubenswrapper[4697]: I0127 15:28:46.669661 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mwqgr" 
event={"ID":"09bebc2c-092b-415e-b4b3-80296620ce1b","Type":"ContainerDied","Data":"bd61487fd854802c2e320e4d05469eb46e19091ab34cfe84eec437e4c137a414"} Jan 27 15:28:47 crc kubenswrapper[4697]: I0127 15:28:47.507939 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:28:47 crc kubenswrapper[4697]: I0127 15:28:47.556235 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nrzk8"] Jan 27 15:28:47 crc kubenswrapper[4697]: I0127 15:28:47.556493 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-nrzk8" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" containerID="cri-o://ecbb4d6d233ecc2067421709c3613025b36534385ec0d4620144bbb9d9533977" gracePeriod=10 Jan 27 15:28:48 crc kubenswrapper[4697]: I0127 15:28:48.655306 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nrzk8" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 27 15:28:48 crc kubenswrapper[4697]: I0127 15:28:48.741807 4697 generic.go:334] "Generic (PLEG): container finished" podID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerID="ecbb4d6d233ecc2067421709c3613025b36534385ec0d4620144bbb9d9533977" exitCode=0 Jan 27 15:28:48 crc kubenswrapper[4697]: I0127 15:28:48.741863 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nrzk8" event={"ID":"621e9d49-138c-485b-a57e-1f3ec16c5875","Type":"ContainerDied","Data":"ecbb4d6d233ecc2067421709c3613025b36534385ec0d4620144bbb9d9533977"} Jan 27 15:28:53 crc kubenswrapper[4697]: I0127 15:28:53.655468 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nrzk8" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 27 15:28:56 crc kubenswrapper[4697]: I0127 15:28:56.822627 4697 generic.go:334] "Generic (PLEG): container finished" podID="f75c5842-64d4-45c9-a282-b8fb8bea1af6" containerID="7c000ddf2638ad23872e6f733e1f5c537c95bf9ee2fd1f801eac52b3a2c28342" exitCode=0 Jan 27 15:28:56 crc kubenswrapper[4697]: I0127 15:28:56.822871 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xgxk5" event={"ID":"f75c5842-64d4-45c9-a282-b8fb8bea1af6","Type":"ContainerDied","Data":"7c000ddf2638ad23872e6f733e1f5c537c95bf9ee2fd1f801eac52b3a2c28342"} Jan 27 15:28:58 crc kubenswrapper[4697]: I0127 15:28:58.655615 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nrzk8" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 27 15:28:58 crc kubenswrapper[4697]: I0127 15:28:58.656036 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 15:28:59.012125 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 15:28:59.012293 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66dh696h678h5b8h5b7h5f5h9fh659h5dh66dh689h558hd9h5bfh665h667h575hb4hb4h68h5ffh699h564h56bh8ch7ch68dh687h644h98h56hbcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9mgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-56b6fb4dd9-vzq9d_openstack(6ceeb921-c7c4-41df-bed1-a082bdcb6e79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 
15:28:59.031764 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-56b6fb4dd9-vzq9d" podUID="6ceeb921-c7c4-41df-bed1-a082bdcb6e79" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 15:28:59.042689 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 15:28:59.042921 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7h5d5h64h65dh5cch5c4hc7hdbhbdh674h7bhd4hdch5cdh67ch64dh684hb5hfdhfh645h5cbh5f5h567h67fh588h8bh5dfh5d6h5d8h667hf6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt6v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5965fc65fb-dvhzz_openstack(d6ad161d-fe95-4ad3-8f60-1f1310b2974c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 
15:28:59.047367 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 15:28:59.053748 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 15:28:59.053910 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67bh9dh94h8h5dchd6h584h9chb8hc8h8fh655h554hf4h7bh59ch5cfh95h9ch8h556hc9h64fh55ch555h5f4h5cfh74h84h567h56hb6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dw6kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-789b55bb8f-pdnjn_openstack(f716119d-b8f8-4bdf-87de-e4452080d972): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 15:28:59.055835 
4697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-789b55bb8f-pdnjn" podUID="f716119d-b8f8-4bdf-87de-e4452080d972" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.139820 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xgxk5" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.197847 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-db-sync-config-data\") pod \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.197912 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-combined-ca-bundle\") pod \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.197960 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-config-data\") pod \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.198019 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b89bk\" (UniqueName: \"kubernetes.io/projected/f75c5842-64d4-45c9-a282-b8fb8bea1af6-kube-api-access-b89bk\") pod 
\"f75c5842-64d4-45c9-a282-b8fb8bea1af6\" (UID: \"f75c5842-64d4-45c9-a282-b8fb8bea1af6\") " Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.205109 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f75c5842-64d4-45c9-a282-b8fb8bea1af6" (UID: "f75c5842-64d4-45c9-a282-b8fb8bea1af6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.207174 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75c5842-64d4-45c9-a282-b8fb8bea1af6-kube-api-access-b89bk" (OuterVolumeSpecName: "kube-api-access-b89bk") pod "f75c5842-64d4-45c9-a282-b8fb8bea1af6" (UID: "f75c5842-64d4-45c9-a282-b8fb8bea1af6"). InnerVolumeSpecName "kube-api-access-b89bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.239312 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-config-data" (OuterVolumeSpecName: "config-data") pod "f75c5842-64d4-45c9-a282-b8fb8bea1af6" (UID: "f75c5842-64d4-45c9-a282-b8fb8bea1af6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.241817 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f75c5842-64d4-45c9-a282-b8fb8bea1af6" (UID: "f75c5842-64d4-45c9-a282-b8fb8bea1af6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.299694 4697 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.299730 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.299740 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75c5842-64d4-45c9-a282-b8fb8bea1af6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.299748 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b89bk\" (UniqueName: \"kubernetes.io/projected/f75c5842-64d4-45c9-a282-b8fb8bea1af6-kube-api-access-b89bk\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.850492 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xgxk5" Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.850478 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xgxk5" event={"ID":"f75c5842-64d4-45c9-a282-b8fb8bea1af6","Type":"ContainerDied","Data":"f13488db6c04a180035cc860a9fa33ac2f12c65c59ed929a139bea8ddec9c293"} Jan 27 15:28:59 crc kubenswrapper[4697]: I0127 15:28:59.850543 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13488db6c04a180035cc860a9fa33ac2f12c65c59ed929a139bea8ddec9c293" Jan 27 15:28:59 crc kubenswrapper[4697]: E0127 15:28:59.864998 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.650476 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rxcqn"] Jan 27 15:29:00 crc kubenswrapper[4697]: E0127 15:29:00.651085 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75c5842-64d4-45c9-a282-b8fb8bea1af6" containerName="glance-db-sync" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.651100 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75c5842-64d4-45c9-a282-b8fb8bea1af6" containerName="glance-db-sync" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.651255 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75c5842-64d4-45c9-a282-b8fb8bea1af6" containerName="glance-db-sync" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.652192 4697 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.674164 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rxcqn"] Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.725627 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.725684 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxd9d\" (UniqueName: \"kubernetes.io/projected/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-kube-api-access-dxd9d\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.725736 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.725763 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-config\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.725821 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.725869 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.827872 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.827937 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxd9d\" (UniqueName: \"kubernetes.io/projected/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-kube-api-access-dxd9d\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.827989 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.828023 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-config\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.828051 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.828100 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.828683 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.828959 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.829330 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.829634 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-config\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.830101 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.852588 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxd9d\" (UniqueName: \"kubernetes.io/projected/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-kube-api-access-dxd9d\") pod \"dnsmasq-dns-785d8bcb8c-rxcqn\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:00 crc kubenswrapper[4697]: I0127 15:29:00.979185 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.515600 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.517617 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.520460 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dxlmx" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.520647 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.525215 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.530544 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.642730 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-config-data\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.642798 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mgq\" (UniqueName: \"kubernetes.io/projected/e63493f9-ccd0-485d-a4ef-827699a7d1de-kube-api-access-k7mgq\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.642829 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc 
kubenswrapper[4697]: I0127 15:29:01.642856 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.642883 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.642931 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-scripts\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.642946 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-logs\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.744392 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7mgq\" (UniqueName: \"kubernetes.io/projected/e63493f9-ccd0-485d-a4ef-827699a7d1de-kube-api-access-k7mgq\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 
crc kubenswrapper[4697]: I0127 15:29:01.744456 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.744496 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.744532 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.744601 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-scripts\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.744623 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-logs\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.744745 4697 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-config-data\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.746271 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-logs\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.746271 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.747297 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.751034 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-config-data\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.763267 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-scripts\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.763582 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.788064 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.796003 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.797873 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.801185 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.805162 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7mgq\" (UniqueName: \"kubernetes.io/projected/e63493f9-ccd0-485d-a4ef-827699a7d1de-kube-api-access-k7mgq\") pod \"glance-default-external-api-0\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.833855 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.842962 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.947896 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.947958 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.947990 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.948062 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsfc9\" (UniqueName: \"kubernetes.io/projected/2b36c8f6-7ed7-4100-9716-9e0d9914667c-kube-api-access-qsfc9\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.948189 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.948231 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:01 crc kubenswrapper[4697]: I0127 15:29:01.948297 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-logs\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.049568 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.051078 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.051289 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-logs\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.051404 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.051522 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.051615 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.051734 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsfc9\" (UniqueName: \"kubernetes.io/projected/2b36c8f6-7ed7-4100-9716-9e0d9914667c-kube-api-access-qsfc9\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.051744 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.051967 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.052192 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-logs\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.064147 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.066174 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsfc9\" (UniqueName: \"kubernetes.io/projected/2b36c8f6-7ed7-4100-9716-9e0d9914667c-kube-api-access-qsfc9\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.070103 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.070973 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.086280 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: I0127 15:29:02.200597 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:02 crc kubenswrapper[4697]: E0127 15:29:02.681649 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 27 15:29:02 crc kubenswrapper[4697]: E0127 15:29:02.682084 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fdh659h695hbfh8fh5c4h697h567h586hbh5cfhbdh85h8ch95h674h599h579h94h6ch588h597hc7h598h54bh58dh68dh676hcdh5fch5f4h65q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trfdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*t
rue,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-58cf9979b5-9tn4x_openstack(a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:29:02 crc kubenswrapper[4697]: E0127 15:29:02.687970 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-58cf9979b5-9tn4x" podUID="a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d" Jan 27 15:29:03 crc kubenswrapper[4697]: I0127 15:29:03.342424 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:29:03 crc kubenswrapper[4697]: I0127 15:29:03.420742 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:29:03 crc kubenswrapper[4697]: I0127 15:29:03.655696 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nrzk8" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 27 15:29:08 crc kubenswrapper[4697]: I0127 15:29:08.655298 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nrzk8" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 27 15:29:13 crc kubenswrapper[4697]: I0127 15:29:13.656753 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nrzk8" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 27 15:29:15 crc kubenswrapper[4697]: E0127 15:29:15.208930 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 27 15:29:15 crc kubenswrapper[4697]: E0127 15:29:15.210521 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h59bh6h5cbh55hd8hd4h5cbh78hcdh596h56ch545hdh676hd5h55dh9bh546h57fh67bh56bh65dh5d4h548h5c5h68dh669h5f8h647h56fh5dcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dp9l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5b9dc56b78-cpxnx_openstack(ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:29:15 crc kubenswrapper[4697]: E0127 15:29:15.279861 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.303317 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.406881 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-fernet-keys\") pod \"09bebc2c-092b-415e-b4b3-80296620ce1b\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.406966 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zt5n\" (UniqueName: \"kubernetes.io/projected/09bebc2c-092b-415e-b4b3-80296620ce1b-kube-api-access-4zt5n\") pod \"09bebc2c-092b-415e-b4b3-80296620ce1b\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.406992 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-config-data\") pod \"09bebc2c-092b-415e-b4b3-80296620ce1b\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.407040 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-scripts\") pod \"09bebc2c-092b-415e-b4b3-80296620ce1b\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.407121 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-combined-ca-bundle\") pod \"09bebc2c-092b-415e-b4b3-80296620ce1b\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.407161 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-credential-keys\") pod \"09bebc2c-092b-415e-b4b3-80296620ce1b\" (UID: \"09bebc2c-092b-415e-b4b3-80296620ce1b\") " Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.415473 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "09bebc2c-092b-415e-b4b3-80296620ce1b" (UID: "09bebc2c-092b-415e-b4b3-80296620ce1b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.419989 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "09bebc2c-092b-415e-b4b3-80296620ce1b" (UID: "09bebc2c-092b-415e-b4b3-80296620ce1b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.424568 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-scripts" (OuterVolumeSpecName: "scripts") pod "09bebc2c-092b-415e-b4b3-80296620ce1b" (UID: "09bebc2c-092b-415e-b4b3-80296620ce1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.424717 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09bebc2c-092b-415e-b4b3-80296620ce1b-kube-api-access-4zt5n" (OuterVolumeSpecName: "kube-api-access-4zt5n") pod "09bebc2c-092b-415e-b4b3-80296620ce1b" (UID: "09bebc2c-092b-415e-b4b3-80296620ce1b"). InnerVolumeSpecName "kube-api-access-4zt5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.445431 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-config-data" (OuterVolumeSpecName: "config-data") pod "09bebc2c-092b-415e-b4b3-80296620ce1b" (UID: "09bebc2c-092b-415e-b4b3-80296620ce1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.450646 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09bebc2c-092b-415e-b4b3-80296620ce1b" (UID: "09bebc2c-092b-415e-b4b3-80296620ce1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.508935 4697 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.509266 4697 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.509283 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zt5n\" (UniqueName: \"kubernetes.io/projected/09bebc2c-092b-415e-b4b3-80296620ce1b-kube-api-access-4zt5n\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.509297 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-config-data\") on node \"crc\" DevicePath \"\"" 
Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.509310 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:15 crc kubenswrapper[4697]: I0127 15:29:15.509321 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bebc2c-092b-415e-b4b3-80296620ce1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.001039 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mwqgr" event={"ID":"09bebc2c-092b-415e-b4b3-80296620ce1b","Type":"ContainerDied","Data":"07dcf0441e80f347c4d7cad8654be241eb9ed2b44c442c6623c297d9b407dc74"} Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.001085 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07dcf0441e80f347c4d7cad8654be241eb9ed2b44c442c6623c297d9b407dc74" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.001140 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mwqgr" Jan 27 15:29:16 crc kubenswrapper[4697]: E0127 15:29:16.003162 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.390057 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mwqgr"] Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.397548 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mwqgr"] Jan 27 15:29:16 crc kubenswrapper[4697]: E0127 15:29:16.443899 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 27 15:29:16 crc kubenswrapper[4697]: E0127 15:29:16.444050 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgvx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-5c6j2_openstack(09a835cc-5807-48ce-a9f8-354d3182603f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:29:16 crc kubenswrapper[4697]: E0127 15:29:16.445273 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-5c6j2" 
podUID="09a835cc-5807-48ce-a9f8-354d3182603f" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.504657 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pdrgr"] Jan 27 15:29:16 crc kubenswrapper[4697]: E0127 15:29:16.505043 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bebc2c-092b-415e-b4b3-80296620ce1b" containerName="keystone-bootstrap" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.505061 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bebc2c-092b-415e-b4b3-80296620ce1b" containerName="keystone-bootstrap" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.505224 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="09bebc2c-092b-415e-b4b3-80296620ce1b" containerName="keystone-bootstrap" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.505968 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.510434 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hr2gd" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.510670 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.510963 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.511970 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.512251 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.567221 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pdrgr"] Jan 27 15:29:16 crc 
kubenswrapper[4697]: I0127 15:29:16.582513 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09bebc2c-092b-415e-b4b3-80296620ce1b" path="/var/lib/kubelet/pods/09bebc2c-092b-415e-b4b3-80296620ce1b/volumes" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.596151 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.621346 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.631884 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.658413 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q8v8\" (UniqueName: \"kubernetes.io/projected/edc034d6-13db-4ae2-be4c-86e4dad22dc7-kube-api-access-4q8v8\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.658478 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-config-data\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.658528 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-credential-keys\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 
15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.658573 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-scripts\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.658613 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-fernet-keys\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.658677 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-combined-ca-bundle\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759395 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trfdq\" (UniqueName: \"kubernetes.io/projected/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-kube-api-access-trfdq\") pod \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759444 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f716119d-b8f8-4bdf-87de-e4452080d972-logs\") pod \"f716119d-b8f8-4bdf-87de-e4452080d972\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759484 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-p9mgn\" (UniqueName: \"kubernetes.io/projected/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-kube-api-access-p9mgn\") pod \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759518 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-scripts\") pod \"f716119d-b8f8-4bdf-87de-e4452080d972\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759556 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw6kg\" (UniqueName: \"kubernetes.io/projected/f716119d-b8f8-4bdf-87de-e4452080d972-kube-api-access-dw6kg\") pod \"f716119d-b8f8-4bdf-87de-e4452080d972\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759597 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-horizon-secret-key\") pod \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759612 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-horizon-secret-key\") pod \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759628 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-logs\") pod \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\" (UID: 
\"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759659 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-logs\") pod \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759716 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-config-data\") pod \"f716119d-b8f8-4bdf-87de-e4452080d972\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759732 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-config-data\") pod \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759795 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f716119d-b8f8-4bdf-87de-e4452080d972-horizon-secret-key\") pod \"f716119d-b8f8-4bdf-87de-e4452080d972\" (UID: \"f716119d-b8f8-4bdf-87de-e4452080d972\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759811 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-scripts\") pod \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759825 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-scripts\") pod \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\" (UID: \"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.759855 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-config-data\") pod \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\" (UID: \"6ceeb921-c7c4-41df-bed1-a082bdcb6e79\") " Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.760281 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-config-data\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.760319 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-credential-keys\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.760343 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-scripts\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.760363 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-fernet-keys\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " 
pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.760468 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-combined-ca-bundle\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.760598 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q8v8\" (UniqueName: \"kubernetes.io/projected/edc034d6-13db-4ae2-be4c-86e4dad22dc7-kube-api-access-4q8v8\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.761241 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-config-data" (OuterVolumeSpecName: "config-data") pod "f716119d-b8f8-4bdf-87de-e4452080d972" (UID: "f716119d-b8f8-4bdf-87de-e4452080d972"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.761451 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-config-data" (OuterVolumeSpecName: "config-data") pod "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d" (UID: "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.763182 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-scripts" (OuterVolumeSpecName: "scripts") pod "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d" (UID: "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.763589 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-scripts" (OuterVolumeSpecName: "scripts") pod "6ceeb921-c7c4-41df-bed1-a082bdcb6e79" (UID: "6ceeb921-c7c4-41df-bed1-a082bdcb6e79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.768193 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-logs" (OuterVolumeSpecName: "logs") pod "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d" (UID: "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.768557 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-logs" (OuterVolumeSpecName: "logs") pod "6ceeb921-c7c4-41df-bed1-a082bdcb6e79" (UID: "6ceeb921-c7c4-41df-bed1-a082bdcb6e79"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.785078 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f716119d-b8f8-4bdf-87de-e4452080d972-logs" (OuterVolumeSpecName: "logs") pod "f716119d-b8f8-4bdf-87de-e4452080d972" (UID: "f716119d-b8f8-4bdf-87de-e4452080d972"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.803724 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-scripts" (OuterVolumeSpecName: "scripts") pod "f716119d-b8f8-4bdf-87de-e4452080d972" (UID: "f716119d-b8f8-4bdf-87de-e4452080d972"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.811342 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-config-data" (OuterVolumeSpecName: "config-data") pod "6ceeb921-c7c4-41df-bed1-a082bdcb6e79" (UID: "6ceeb921-c7c4-41df-bed1-a082bdcb6e79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.818023 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-config-data\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.833801 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-scripts\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.861535 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.861558 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.861567 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.861575 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.861583 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.861591 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.861600 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f716119d-b8f8-4bdf-87de-e4452080d972-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.861608 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f716119d-b8f8-4bdf-87de-e4452080d972-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.861615 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.864665 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-fernet-keys\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.874326 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-credential-keys\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.874735 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4q8v8\" (UniqueName: \"kubernetes.io/projected/edc034d6-13db-4ae2-be4c-86e4dad22dc7-kube-api-access-4q8v8\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.875436 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-combined-ca-bundle\") pod \"keystone-bootstrap-pdrgr\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.938937 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-kube-api-access-p9mgn" (OuterVolumeSpecName: "kube-api-access-p9mgn") pod "6ceeb921-c7c4-41df-bed1-a082bdcb6e79" (UID: "6ceeb921-c7c4-41df-bed1-a082bdcb6e79"). InnerVolumeSpecName "kube-api-access-p9mgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.939065 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f716119d-b8f8-4bdf-87de-e4452080d972-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f716119d-b8f8-4bdf-87de-e4452080d972" (UID: "f716119d-b8f8-4bdf-87de-e4452080d972"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.939442 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-kube-api-access-trfdq" (OuterVolumeSpecName: "kube-api-access-trfdq") pod "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d" (UID: "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d"). InnerVolumeSpecName "kube-api-access-trfdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.939540 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f716119d-b8f8-4bdf-87de-e4452080d972-kube-api-access-dw6kg" (OuterVolumeSpecName: "kube-api-access-dw6kg") pod "f716119d-b8f8-4bdf-87de-e4452080d972" (UID: "f716119d-b8f8-4bdf-87de-e4452080d972"). InnerVolumeSpecName "kube-api-access-dw6kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.939920 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d" (UID: "a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.939986 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6ceeb921-c7c4-41df-bed1-a082bdcb6e79" (UID: "6ceeb921-c7c4-41df-bed1-a082bdcb6e79"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.946446 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.963239 4697 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f716119d-b8f8-4bdf-87de-e4452080d972-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.963284 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trfdq\" (UniqueName: \"kubernetes.io/projected/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-kube-api-access-trfdq\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.963298 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9mgn\" (UniqueName: \"kubernetes.io/projected/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-kube-api-access-p9mgn\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.963311 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw6kg\" (UniqueName: \"kubernetes.io/projected/f716119d-b8f8-4bdf-87de-e4452080d972-kube-api-access-dw6kg\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.963323 4697 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ceeb921-c7c4-41df-bed1-a082bdcb6e79-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:16 crc kubenswrapper[4697]: I0127 15:29:16.963334 4697 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.011860 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58cf9979b5-9tn4x" Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.011866 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58cf9979b5-9tn4x" event={"ID":"a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d","Type":"ContainerDied","Data":"7991fb1895c3da4f700b312332c89f6ff9adce01b3e2cc08b31fd74ea3ace8cf"} Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.016874 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b6fb4dd9-vzq9d" event={"ID":"6ceeb921-c7c4-41df-bed1-a082bdcb6e79","Type":"ContainerDied","Data":"8e3c916b4bbe2d56f93b3375840b4c9e8a5f2f9784327ff50fe12c09dfcf47ef"} Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.016961 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56b6fb4dd9-vzq9d" Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.019224 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-789b55bb8f-pdnjn" Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.019241 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789b55bb8f-pdnjn" event={"ID":"f716119d-b8f8-4bdf-87de-e4452080d972","Type":"ContainerDied","Data":"581d9414a5dbcec64939a3fcbc60962bb3dd8729b3a2d996cc09ee86bbf8cf7f"} Jan 27 15:29:17 crc kubenswrapper[4697]: E0127 15:29:17.020678 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-5c6j2" podUID="09a835cc-5807-48ce-a9f8-354d3182603f" Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.117763 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56b6fb4dd9-vzq9d"] Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.172392 4697 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56b6fb4dd9-vzq9d"] Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.193403 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58cf9979b5-9tn4x"] Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.200483 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58cf9979b5-9tn4x"] Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.213806 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-789b55bb8f-pdnjn"] Jan 27 15:29:17 crc kubenswrapper[4697]: I0127 15:29:17.220042 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-789b55bb8f-pdnjn"] Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.029642 4697 generic.go:334] "Generic (PLEG): container finished" podID="8fac9142-cfe0-4849-b6d4-3315ce2475ef" containerID="b415f2de3fd0a8e4b4e4315c410125fe1f25008c46f379f9a295599c46f44730" exitCode=0 Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.029771 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pbx5g" event={"ID":"8fac9142-cfe0-4849-b6d4-3315ce2475ef","Type":"ContainerDied","Data":"b415f2de3fd0a8e4b4e4315c410125fe1f25008c46f379f9a295599c46f44730"} Jan 27 15:29:18 crc kubenswrapper[4697]: E0127 15:29:18.254064 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 15:29:18 crc kubenswrapper[4697]: E0127 15:29:18.254220 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mjlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-n5g7m_openstack(ba2a2abf-806a-4708-8f03-9e68c85c6c6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:29:18 crc kubenswrapper[4697]: E0127 15:29:18.255506 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-n5g7m" podUID="ba2a2abf-806a-4708-8f03-9e68c85c6c6c" Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.618630 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ceeb921-c7c4-41df-bed1-a082bdcb6e79" path="/var/lib/kubelet/pods/6ceeb921-c7c4-41df-bed1-a082bdcb6e79/volumes" Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.623909 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d" path="/var/lib/kubelet/pods/a8b36f3c-50c2-400c-bd10-0dfe3ac4a01d/volumes" Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.624449 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f716119d-b8f8-4bdf-87de-e4452080d972" path="/var/lib/kubelet/pods/f716119d-b8f8-4bdf-87de-e4452080d972/volumes" Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.664340 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.829015 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf9q6\" (UniqueName: \"kubernetes.io/projected/621e9d49-138c-485b-a57e-1f3ec16c5875-kube-api-access-mf9q6\") pod \"621e9d49-138c-485b-a57e-1f3ec16c5875\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.829338 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-config\") pod \"621e9d49-138c-485b-a57e-1f3ec16c5875\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.829362 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-sb\") pod \"621e9d49-138c-485b-a57e-1f3ec16c5875\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.829380 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-nb\") pod \"621e9d49-138c-485b-a57e-1f3ec16c5875\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.829536 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-dns-svc\") pod \"621e9d49-138c-485b-a57e-1f3ec16c5875\" (UID: \"621e9d49-138c-485b-a57e-1f3ec16c5875\") " Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.872745 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/621e9d49-138c-485b-a57e-1f3ec16c5875-kube-api-access-mf9q6" (OuterVolumeSpecName: "kube-api-access-mf9q6") pod "621e9d49-138c-485b-a57e-1f3ec16c5875" (UID: "621e9d49-138c-485b-a57e-1f3ec16c5875"). InnerVolumeSpecName "kube-api-access-mf9q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.924864 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "621e9d49-138c-485b-a57e-1f3ec16c5875" (UID: "621e9d49-138c-485b-a57e-1f3ec16c5875"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.931495 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.931528 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf9q6\" (UniqueName: \"kubernetes.io/projected/621e9d49-138c-485b-a57e-1f3ec16c5875-kube-api-access-mf9q6\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:18 crc kubenswrapper[4697]: I0127 15:29:18.973035 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "621e9d49-138c-485b-a57e-1f3ec16c5875" (UID: "621e9d49-138c-485b-a57e-1f3ec16c5875"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.008686 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "621e9d49-138c-485b-a57e-1f3ec16c5875" (UID: "621e9d49-138c-485b-a57e-1f3ec16c5875"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.025553 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-config" (OuterVolumeSpecName: "config") pod "621e9d49-138c-485b-a57e-1f3ec16c5875" (UID: "621e9d49-138c-485b-a57e-1f3ec16c5875"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.037824 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.037855 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.037869 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e9d49-138c-485b-a57e-1f3ec16c5875-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.042641 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.057879 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nrzk8" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.058246 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nrzk8" event={"ID":"621e9d49-138c-485b-a57e-1f3ec16c5875","Type":"ContainerDied","Data":"3c64131b550b4f10c0ae08c4ef24f9a5b447d526f4ed8f27556ecd96eb936e0d"} Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.058905 4697 scope.go:117] "RemoveContainer" containerID="ecbb4d6d233ecc2067421709c3613025b36534385ec0d4620144bbb9d9533977" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.067425 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xc9hp" event={"ID":"c11e83f3-61e4-4f13-89e2-cf9209760247","Type":"ContainerStarted","Data":"80dc09b6ac5700456b759d3d7ebde4333d93b1f0223b1146f3edfbff995bf507"} Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.094592 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerStarted","Data":"d1e760cbe02185bc38a0ab3d68834dd5be89159d85d23e6c2893a23d0cd8eff0"} Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.101495 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xc9hp" podStartSLOduration=2.925717094 podStartE2EDuration="47.101472083s" podCreationTimestamp="2026-01-27 15:28:32 +0000 UTC" firstStartedPulling="2026-01-27 15:28:34.066340405 +0000 UTC m=+1210.238740186" lastFinishedPulling="2026-01-27 15:29:18.242095394 +0000 UTC m=+1254.414495175" observedRunningTime="2026-01-27 15:29:19.083452275 +0000 UTC m=+1255.255852056" watchObservedRunningTime="2026-01-27 15:29:19.101472083 +0000 UTC m=+1255.273871864" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.105943 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerStarted","Data":"023d0b7bbc4282457b9c42149fe43bd96f28c8d5b0006f9f50340bf622320c7a"} Jan 27 15:29:19 crc kubenswrapper[4697]: E0127 15:29:19.113299 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-n5g7m" podUID="ba2a2abf-806a-4708-8f03-9e68c85c6c6c" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.114195 4697 scope.go:117] "RemoveContainer" containerID="567854e2f52d0ef7316cd93a2cf47b1045e956ea21cec65266ab23aaaefe8a96" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.133884 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nrzk8"] Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.146870 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nrzk8"] Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.155478 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pdrgr"] Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.164381 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rxcqn"] Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.425762 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.550336 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-config\") pod \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.550477 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-combined-ca-bundle\") pod \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.550557 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkzbk\" (UniqueName: \"kubernetes.io/projected/8fac9142-cfe0-4849-b6d4-3315ce2475ef-kube-api-access-zkzbk\") pod \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\" (UID: \"8fac9142-cfe0-4849-b6d4-3315ce2475ef\") " Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.555994 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fac9142-cfe0-4849-b6d4-3315ce2475ef-kube-api-access-zkzbk" (OuterVolumeSpecName: "kube-api-access-zkzbk") pod "8fac9142-cfe0-4849-b6d4-3315ce2475ef" (UID: "8fac9142-cfe0-4849-b6d4-3315ce2475ef"). InnerVolumeSpecName "kube-api-access-zkzbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.576918 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-config" (OuterVolumeSpecName: "config") pod "8fac9142-cfe0-4849-b6d4-3315ce2475ef" (UID: "8fac9142-cfe0-4849-b6d4-3315ce2475ef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.608690 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fac9142-cfe0-4849-b6d4-3315ce2475ef" (UID: "8fac9142-cfe0-4849-b6d4-3315ce2475ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.652858 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.652915 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fac9142-cfe0-4849-b6d4-3315ce2475ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.652925 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkzbk\" (UniqueName: \"kubernetes.io/projected/8fac9142-cfe0-4849-b6d4-3315ce2475ef-kube-api-access-zkzbk\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:19 crc kubenswrapper[4697]: I0127 15:29:19.848336 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.122284 4697 generic.go:334] "Generic (PLEG): container finished" podID="5f1bd3d5-7712-4eb4-a256-ffe933ef88de" containerID="c13e5dd074d46bfec8dfd9b381098548bb5829a0b319f7f62f86611e209b4b00" exitCode=0 Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.122347 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" 
event={"ID":"5f1bd3d5-7712-4eb4-a256-ffe933ef88de","Type":"ContainerDied","Data":"c13e5dd074d46bfec8dfd9b381098548bb5829a0b319f7f62f86611e209b4b00"} Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.122372 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" event={"ID":"5f1bd3d5-7712-4eb4-a256-ffe933ef88de","Type":"ContainerStarted","Data":"b287e7af57d45ade8d2e5080a255b3b774c6839591a7b6d61244e13f47c0e048"} Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.137181 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e63493f9-ccd0-485d-a4ef-827699a7d1de","Type":"ContainerStarted","Data":"b5e6958634b5ef82eaacbb26a4da255ff438c1c324f1fabdcdfea68eb3ecd76e"} Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.153485 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b36c8f6-7ed7-4100-9716-9e0d9914667c","Type":"ContainerStarted","Data":"5e928e90885715afc384d4a938689d61e0426eb3f1b0905150d7bf6114270162"} Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.178045 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerStarted","Data":"e54450188c94f7298427d91a62c88df853535928735655dd6ef49dea887a8a99"} Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.189112 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pbx5g" event={"ID":"8fac9142-cfe0-4849-b6d4-3315ce2475ef","Type":"ContainerDied","Data":"46ad3c48299713728442b73f3951287fc90938046d7ba8ac0aa2009957e9a040"} Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.189157 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46ad3c48299713728442b73f3951287fc90938046d7ba8ac0aa2009957e9a040" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.189225 4697 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pbx5g" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.223892 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pdrgr" event={"ID":"edc034d6-13db-4ae2-be4c-86e4dad22dc7","Type":"ContainerStarted","Data":"3b3e5994844f75460670ebf4405acc857c69bcbf2ea85d491c81da5c3d0a7c4f"} Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.223936 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pdrgr" event={"ID":"edc034d6-13db-4ae2-be4c-86e4dad22dc7","Type":"ContainerStarted","Data":"11691313ad63b1f68626dd2978bcafb6f25ce99f09bc3e50982693d2d53fbc91"} Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.230703 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5965fc65fb-dvhzz" podStartSLOduration=3.494448318 podStartE2EDuration="40.230680166s" podCreationTimestamp="2026-01-27 15:28:40 +0000 UTC" firstStartedPulling="2026-01-27 15:28:41.658220818 +0000 UTC m=+1217.830620599" lastFinishedPulling="2026-01-27 15:29:18.394452666 +0000 UTC m=+1254.566852447" observedRunningTime="2026-01-27 15:29:20.212585647 +0000 UTC m=+1256.384985428" watchObservedRunningTime="2026-01-27 15:29:20.230680166 +0000 UTC m=+1256.403079947" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.314501 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rxcqn"] Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.315449 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pdrgr" podStartSLOduration=4.315435486 podStartE2EDuration="4.315435486s" podCreationTimestamp="2026-01-27 15:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:20.266419245 +0000 UTC 
m=+1256.438819026" watchObservedRunningTime="2026-01-27 15:29:20.315435486 +0000 UTC m=+1256.487835267" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.402410 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kc2ll"] Jan 27 15:29:20 crc kubenswrapper[4697]: E0127 15:29:20.403464 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.403483 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" Jan 27 15:29:20 crc kubenswrapper[4697]: E0127 15:29:20.403519 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fac9142-cfe0-4849-b6d4-3315ce2475ef" containerName="neutron-db-sync" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.403529 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fac9142-cfe0-4849-b6d4-3315ce2475ef" containerName="neutron-db-sync" Jan 27 15:29:20 crc kubenswrapper[4697]: E0127 15:29:20.403555 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="init" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.403565 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="init" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.403891 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" containerName="dnsmasq-dns" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.403938 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fac9142-cfe0-4849-b6d4-3315ce2475ef" containerName="neutron-db-sync" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.405179 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.420954 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kc2ll"] Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.585291 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" path="/var/lib/kubelet/pods/621e9d49-138c-485b-a57e-1f3ec16c5875/volumes" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.592489 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.592535 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-svc\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.592587 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bws4d\" (UniqueName: \"kubernetes.io/projected/e4bebb87-c35b-4185-8c32-560d5ddc3664-kube-api-access-bws4d\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.592619 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-sb\") pod 
\"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.592661 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-config\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.592695 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.631921 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.631963 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.694311 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bws4d\" (UniqueName: \"kubernetes.io/projected/e4bebb87-c35b-4185-8c32-560d5ddc3664-kube-api-access-bws4d\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.695288 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: 
\"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.696516 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-config\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.697385 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.698322 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.699101 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-svc\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.698194 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " 
pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.697303 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-config\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.699038 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.696365 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.699811 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-svc\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:20 crc kubenswrapper[4697]: I0127 15:29:20.736736 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bws4d\" (UniqueName: \"kubernetes.io/projected/e4bebb87-c35b-4185-8c32-560d5ddc3664-kube-api-access-bws4d\") pod \"dnsmasq-dns-55f844cf75-kc2ll\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 
15:29:21.035446 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.499557 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kc2ll"] Jan 27 15:29:21 crc kubenswrapper[4697]: W0127 15:29:21.523609 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4bebb87_c35b_4185_8c32_560d5ddc3664.slice/crio-171a21f24282c55c836fe482a4d3434df658011cfd338ad9e50fab3fb7f1f870 WatchSource:0}: Error finding container 171a21f24282c55c836fe482a4d3434df658011cfd338ad9e50fab3fb7f1f870: Status 404 returned error can't find the container with id 171a21f24282c55c836fe482a4d3434df658011cfd338ad9e50fab3fb7f1f870 Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.658444 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6887cfc8d4-v8f57"] Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.660985 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.667161 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.667497 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.667646 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dbs24" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.667818 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.682571 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6887cfc8d4-v8f57"] Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.823242 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-combined-ca-bundle\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.823581 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-httpd-config\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.823664 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-config\") pod \"neutron-6887cfc8d4-v8f57\" (UID: 
\"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.823704 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nprcp\" (UniqueName: \"kubernetes.io/projected/dc00891c-0cae-42c0-bb0a-8e78bd146365-kube-api-access-nprcp\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.823741 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-ovndb-tls-certs\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.924963 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nprcp\" (UniqueName: \"kubernetes.io/projected/dc00891c-0cae-42c0-bb0a-8e78bd146365-kube-api-access-nprcp\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.925039 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-ovndb-tls-certs\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.925138 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-combined-ca-bundle\") pod \"neutron-6887cfc8d4-v8f57\" (UID: 
\"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.925216 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-httpd-config\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.925260 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-config\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.932303 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-httpd-config\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.932371 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-ovndb-tls-certs\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.940822 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-combined-ca-bundle\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.941996 
4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-config\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.949498 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nprcp\" (UniqueName: \"kubernetes.io/projected/dc00891c-0cae-42c0-bb0a-8e78bd146365-kube-api-access-nprcp\") pod \"neutron-6887cfc8d4-v8f57\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") " pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:21 crc kubenswrapper[4697]: I0127 15:29:21.981515 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:22 crc kubenswrapper[4697]: E0127 15:29:22.089292 4697 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 15:29:22 crc kubenswrapper[4697]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5f1bd3d5-7712-4eb4-a256-ffe933ef88de/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 15:29:22 crc kubenswrapper[4697]: > podSandboxID="b287e7af57d45ade8d2e5080a255b3b774c6839591a7b6d61244e13f47c0e048" Jan 27 15:29:22 crc kubenswrapper[4697]: E0127 15:29:22.089763 4697 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 15:29:22 crc kubenswrapper[4697]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n86h549h66h5b5h5dfh587h56bh555h586h5h67fh584h665h5f8h689h64dh58fhf6hd9h648h5bfhcfh55fh696h7bh55fh5f8h65h5dh57h645h596q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxd9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-785d8bcb8c-rxcqn_openstack(5f1bd3d5-7712-4eb4-a256-ffe933ef88de): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5f1bd3d5-7712-4eb4-a256-ffe933ef88de/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 15:29:22 crc kubenswrapper[4697]: > logger="UnhandledError" Jan 27 15:29:22 crc kubenswrapper[4697]: E0127 15:29:22.091221 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5f1bd3d5-7712-4eb4-a256-ffe933ef88de/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" podUID="5f1bd3d5-7712-4eb4-a256-ffe933ef88de" Jan 27 15:29:22 crc kubenswrapper[4697]: I0127 15:29:22.258333 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b36c8f6-7ed7-4100-9716-9e0d9914667c","Type":"ContainerStarted","Data":"1bde98f760df90ce10716d4485d501650d35a743ca166d9a215f3368f405bf82"} Jan 27 15:29:22 crc 
kubenswrapper[4697]: I0127 15:29:22.261002 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" event={"ID":"e4bebb87-c35b-4185-8c32-560d5ddc3664","Type":"ContainerStarted","Data":"171a21f24282c55c836fe482a4d3434df658011cfd338ad9e50fab3fb7f1f870"} Jan 27 15:29:22 crc kubenswrapper[4697]: I0127 15:29:22.625343 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6887cfc8d4-v8f57"] Jan 27 15:29:22 crc kubenswrapper[4697]: I0127 15:29:22.838229 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:22 crc kubenswrapper[4697]: I0127 15:29:22.966163 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-swift-storage-0\") pod \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " Jan 27 15:29:22 crc kubenswrapper[4697]: I0127 15:29:22.966379 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-config\") pod \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " Jan 27 15:29:22 crc kubenswrapper[4697]: I0127 15:29:22.966440 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-nb\") pod \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " Jan 27 15:29:22 crc kubenswrapper[4697]: I0127 15:29:22.966477 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-svc\") pod \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\" (UID: 
\"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " Jan 27 15:29:22 crc kubenswrapper[4697]: I0127 15:29:22.967378 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxd9d\" (UniqueName: \"kubernetes.io/projected/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-kube-api-access-dxd9d\") pod \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " Jan 27 15:29:22 crc kubenswrapper[4697]: I0127 15:29:22.967438 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-sb\") pod \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\" (UID: \"5f1bd3d5-7712-4eb4-a256-ffe933ef88de\") " Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.040915 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-kube-api-access-dxd9d" (OuterVolumeSpecName: "kube-api-access-dxd9d") pod "5f1bd3d5-7712-4eb4-a256-ffe933ef88de" (UID: "5f1bd3d5-7712-4eb4-a256-ffe933ef88de"). InnerVolumeSpecName "kube-api-access-dxd9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.044234 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f1bd3d5-7712-4eb4-a256-ffe933ef88de" (UID: "5f1bd3d5-7712-4eb4-a256-ffe933ef88de"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.066372 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f1bd3d5-7712-4eb4-a256-ffe933ef88de" (UID: "5f1bd3d5-7712-4eb4-a256-ffe933ef88de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.075810 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.075847 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxd9d\" (UniqueName: \"kubernetes.io/projected/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-kube-api-access-dxd9d\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.075861 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.084219 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-config" (OuterVolumeSpecName: "config") pod "5f1bd3d5-7712-4eb4-a256-ffe933ef88de" (UID: "5f1bd3d5-7712-4eb4-a256-ffe933ef88de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.084681 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f1bd3d5-7712-4eb4-a256-ffe933ef88de" (UID: "5f1bd3d5-7712-4eb4-a256-ffe933ef88de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.085336 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f1bd3d5-7712-4eb4-a256-ffe933ef88de" (UID: "5f1bd3d5-7712-4eb4-a256-ffe933ef88de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.177444 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.177475 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.177486 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1bd3d5-7712-4eb4-a256-ffe933ef88de-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.272972 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" 
event={"ID":"e4bebb87-c35b-4185-8c32-560d5ddc3664","Type":"ContainerStarted","Data":"10518bbe554a8ca61cdc472176eee5c16ed7c10cdfae11c2345e3111734a8059"} Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.275281 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.275275 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rxcqn" event={"ID":"5f1bd3d5-7712-4eb4-a256-ffe933ef88de","Type":"ContainerDied","Data":"b287e7af57d45ade8d2e5080a255b3b774c6839591a7b6d61244e13f47c0e048"} Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.275609 4697 scope.go:117] "RemoveContainer" containerID="c13e5dd074d46bfec8dfd9b381098548bb5829a0b319f7f62f86611e209b4b00" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.280060 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e63493f9-ccd0-485d-a4ef-827699a7d1de","Type":"ContainerStarted","Data":"427d1ce893cac9dade30d1e12137c9329eb36b513da688e2f404559ca58c9d87"} Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.281565 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6887cfc8d4-v8f57" event={"ID":"dc00891c-0cae-42c0-bb0a-8e78bd146365","Type":"ContainerStarted","Data":"fc8e63de561ed76bb1d4d955372538f84c96e36748660036ee2ee2c41d5d4e68"} Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.399178 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rxcqn"] Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.406560 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rxcqn"] Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.655905 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nrzk8" podUID="621e9d49-138c-485b-a57e-1f3ec16c5875" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.846663 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5ccccf969c-jgqtz"] Jan 27 15:29:23 crc kubenswrapper[4697]: E0127 15:29:23.847343 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1bd3d5-7712-4eb4-a256-ffe933ef88de" containerName="init" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.847407 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1bd3d5-7712-4eb4-a256-ffe933ef88de" containerName="init" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.847694 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1bd3d5-7712-4eb4-a256-ffe933ef88de" containerName="init" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.848567 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.853250 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.853445 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.887748 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ccccf969c-jgqtz"] Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.913865 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-public-tls-certs\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.913950 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-config\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.913978 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrbs6\" (UniqueName: \"kubernetes.io/projected/c83b7e83-b006-4f05-9f00-aa03173c05d9-kube-api-access-nrbs6\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.914013 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-ovndb-tls-certs\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.914050 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-combined-ca-bundle\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.914074 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-internal-tls-certs\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:23 crc kubenswrapper[4697]: I0127 15:29:23.914114 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-httpd-config\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.015293 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-combined-ca-bundle\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.015355 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-internal-tls-certs\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.015403 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-httpd-config\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.015436 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-public-tls-certs\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.015480 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-config\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.015504 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrbs6\" (UniqueName: \"kubernetes.io/projected/c83b7e83-b006-4f05-9f00-aa03173c05d9-kube-api-access-nrbs6\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.015538 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-ovndb-tls-certs\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.021377 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-internal-tls-certs\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.021559 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-config\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.025377 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-public-tls-certs\") pod 
\"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.029831 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-httpd-config\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.035597 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-combined-ca-bundle\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.042591 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrbs6\" (UniqueName: \"kubernetes.io/projected/c83b7e83-b006-4f05-9f00-aa03173c05d9-kube-api-access-nrbs6\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.149919 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-ovndb-tls-certs\") pod \"neutron-5ccccf969c-jgqtz\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.164730 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.292333 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6887cfc8d4-v8f57" event={"ID":"dc00891c-0cae-42c0-bb0a-8e78bd146365","Type":"ContainerStarted","Data":"2a16194145ec6654978b540e58e66ba2b45b349503991b45948adac1968da332"} Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.294065 4697 generic.go:334] "Generic (PLEG): container finished" podID="e4bebb87-c35b-4185-8c32-560d5ddc3664" containerID="10518bbe554a8ca61cdc472176eee5c16ed7c10cdfae11c2345e3111734a8059" exitCode=0 Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.294127 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" event={"ID":"e4bebb87-c35b-4185-8c32-560d5ddc3664","Type":"ContainerDied","Data":"10518bbe554a8ca61cdc472176eee5c16ed7c10cdfae11c2345e3111734a8059"} Jan 27 15:29:24 crc kubenswrapper[4697]: I0127 15:29:24.595060 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1bd3d5-7712-4eb4-a256-ffe933ef88de" path="/var/lib/kubelet/pods/5f1bd3d5-7712-4eb4-a256-ffe933ef88de/volumes" Jan 27 15:29:25 crc kubenswrapper[4697]: I0127 15:29:25.312280 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b36c8f6-7ed7-4100-9716-9e0d9914667c","Type":"ContainerStarted","Data":"17b80c45ce1bf256d46d60ca3f1a308792d9c56d5b944a9628ab01d9f3c96d8f"} Jan 27 15:29:25 crc kubenswrapper[4697]: I0127 15:29:25.312748 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerName="glance-log" containerID="cri-o://1bde98f760df90ce10716d4485d501650d35a743ca166d9a215f3368f405bf82" gracePeriod=30 Jan 27 15:29:25 crc kubenswrapper[4697]: I0127 15:29:25.313552 4697 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-internal-api-0" podUID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerName="glance-httpd" containerID="cri-o://17b80c45ce1bf256d46d60ca3f1a308792d9c56d5b944a9628ab01d9f3c96d8f" gracePeriod=30 Jan 27 15:29:25 crc kubenswrapper[4697]: I0127 15:29:25.326463 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e63493f9-ccd0-485d-a4ef-827699a7d1de","Type":"ContainerStarted","Data":"b4ea7f6164928dcfb77736cf4ffe05797bb915fa1f74cb4485c9421efe278def"} Jan 27 15:29:25 crc kubenswrapper[4697]: I0127 15:29:25.355067 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=25.355048082 podStartE2EDuration="25.355048082s" podCreationTimestamp="2026-01-27 15:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:25.339657578 +0000 UTC m=+1261.512057359" watchObservedRunningTime="2026-01-27 15:29:25.355048082 +0000 UTC m=+1261.527447863" Jan 27 15:29:26 crc kubenswrapper[4697]: I0127 15:29:26.337172 4697 generic.go:334] "Generic (PLEG): container finished" podID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerID="17b80c45ce1bf256d46d60ca3f1a308792d9c56d5b944a9628ab01d9f3c96d8f" exitCode=143 Jan 27 15:29:26 crc kubenswrapper[4697]: I0127 15:29:26.337202 4697 generic.go:334] "Generic (PLEG): container finished" podID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerID="1bde98f760df90ce10716d4485d501650d35a743ca166d9a215f3368f405bf82" exitCode=143 Jan 27 15:29:26 crc kubenswrapper[4697]: I0127 15:29:26.337215 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b36c8f6-7ed7-4100-9716-9e0d9914667c","Type":"ContainerDied","Data":"17b80c45ce1bf256d46d60ca3f1a308792d9c56d5b944a9628ab01d9f3c96d8f"} Jan 27 15:29:26 crc kubenswrapper[4697]: I0127 
15:29:26.337280 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b36c8f6-7ed7-4100-9716-9e0d9914667c","Type":"ContainerDied","Data":"1bde98f760df90ce10716d4485d501650d35a743ca166d9a215f3368f405bf82"} Jan 27 15:29:26 crc kubenswrapper[4697]: I0127 15:29:26.337335 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e63493f9-ccd0-485d-a4ef-827699a7d1de" containerName="glance-log" containerID="cri-o://427d1ce893cac9dade30d1e12137c9329eb36b513da688e2f404559ca58c9d87" gracePeriod=30 Jan 27 15:29:26 crc kubenswrapper[4697]: I0127 15:29:26.337414 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e63493f9-ccd0-485d-a4ef-827699a7d1de" containerName="glance-httpd" containerID="cri-o://b4ea7f6164928dcfb77736cf4ffe05797bb915fa1f74cb4485c9421efe278def" gracePeriod=30 Jan 27 15:29:26 crc kubenswrapper[4697]: I0127 15:29:26.387304 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=26.387289619 podStartE2EDuration="26.387289619s" podCreationTimestamp="2026-01-27 15:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:26.37658262 +0000 UTC m=+1262.548982401" watchObservedRunningTime="2026-01-27 15:29:26.387289619 +0000 UTC m=+1262.559689400" Jan 27 15:29:27 crc kubenswrapper[4697]: I0127 15:29:27.356283 4697 generic.go:334] "Generic (PLEG): container finished" podID="e63493f9-ccd0-485d-a4ef-827699a7d1de" containerID="b4ea7f6164928dcfb77736cf4ffe05797bb915fa1f74cb4485c9421efe278def" exitCode=143 Jan 27 15:29:27 crc kubenswrapper[4697]: I0127 15:29:27.356939 4697 generic.go:334] "Generic (PLEG): container finished" podID="e63493f9-ccd0-485d-a4ef-827699a7d1de" 
containerID="427d1ce893cac9dade30d1e12137c9329eb36b513da688e2f404559ca58c9d87" exitCode=143 Jan 27 15:29:27 crc kubenswrapper[4697]: I0127 15:29:27.356907 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e63493f9-ccd0-485d-a4ef-827699a7d1de","Type":"ContainerDied","Data":"b4ea7f6164928dcfb77736cf4ffe05797bb915fa1f74cb4485c9421efe278def"} Jan 27 15:29:27 crc kubenswrapper[4697]: I0127 15:29:27.356980 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e63493f9-ccd0-485d-a4ef-827699a7d1de","Type":"ContainerDied","Data":"427d1ce893cac9dade30d1e12137c9329eb36b513da688e2f404559ca58c9d87"} Jan 27 15:29:27 crc kubenswrapper[4697]: I0127 15:29:27.879677 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ccccf969c-jgqtz"] Jan 27 15:29:27 crc kubenswrapper[4697]: W0127 15:29:27.998314 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc83b7e83_b006_4f05_9f00_aa03173c05d9.slice/crio-a0bb3148509f3f5c0a492bf7c88d6610899aad9c5d1728f02471ee5ed42d452b WatchSource:0}: Error finding container a0bb3148509f3f5c0a492bf7c88d6610899aad9c5d1728f02471ee5ed42d452b: Status 404 returned error can't find the container with id a0bb3148509f3f5c0a492bf7c88d6610899aad9c5d1728f02471ee5ed42d452b Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.011231 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.022036 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e63493f9-ccd0-485d-a4ef-827699a7d1de\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.022136 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-combined-ca-bundle\") pod \"e63493f9-ccd0-485d-a4ef-827699a7d1de\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.022184 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-config-data\") pod \"e63493f9-ccd0-485d-a4ef-827699a7d1de\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.022299 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-logs\") pod \"e63493f9-ccd0-485d-a4ef-827699a7d1de\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.022333 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7mgq\" (UniqueName: \"kubernetes.io/projected/e63493f9-ccd0-485d-a4ef-827699a7d1de-kube-api-access-k7mgq\") pod \"e63493f9-ccd0-485d-a4ef-827699a7d1de\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.022385 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-httpd-run\") pod \"e63493f9-ccd0-485d-a4ef-827699a7d1de\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.022457 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-scripts\") pod \"e63493f9-ccd0-485d-a4ef-827699a7d1de\" (UID: \"e63493f9-ccd0-485d-a4ef-827699a7d1de\") " Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.023116 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-logs" (OuterVolumeSpecName: "logs") pod "e63493f9-ccd0-485d-a4ef-827699a7d1de" (UID: "e63493f9-ccd0-485d-a4ef-827699a7d1de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.023151 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e63493f9-ccd0-485d-a4ef-827699a7d1de" (UID: "e63493f9-ccd0-485d-a4ef-827699a7d1de"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.029003 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-scripts" (OuterVolumeSpecName: "scripts") pod "e63493f9-ccd0-485d-a4ef-827699a7d1de" (UID: "e63493f9-ccd0-485d-a4ef-827699a7d1de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.033399 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "e63493f9-ccd0-485d-a4ef-827699a7d1de" (UID: "e63493f9-ccd0-485d-a4ef-827699a7d1de"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.044143 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63493f9-ccd0-485d-a4ef-827699a7d1de-kube-api-access-k7mgq" (OuterVolumeSpecName: "kube-api-access-k7mgq") pod "e63493f9-ccd0-485d-a4ef-827699a7d1de" (UID: "e63493f9-ccd0-485d-a4ef-827699a7d1de"). InnerVolumeSpecName "kube-api-access-k7mgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.107465 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e63493f9-ccd0-485d-a4ef-827699a7d1de" (UID: "e63493f9-ccd0-485d-a4ef-827699a7d1de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.124435 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.124460 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.124470 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7mgq\" (UniqueName: \"kubernetes.io/projected/e63493f9-ccd0-485d-a4ef-827699a7d1de-kube-api-access-k7mgq\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.124479 4697 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e63493f9-ccd0-485d-a4ef-827699a7d1de-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.124490 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.124508 4697 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.145008 4697 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.154904 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-config-data" (OuterVolumeSpecName: "config-data") pod "e63493f9-ccd0-485d-a4ef-827699a7d1de" (UID: "e63493f9-ccd0-485d-a4ef-827699a7d1de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.225666 4697 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.225700 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e63493f9-ccd0-485d-a4ef-827699a7d1de-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.365410 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ccccf969c-jgqtz" event={"ID":"c83b7e83-b006-4f05-9f00-aa03173c05d9","Type":"ContainerStarted","Data":"a0bb3148509f3f5c0a492bf7c88d6610899aad9c5d1728f02471ee5ed42d452b"} Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.367570 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" event={"ID":"e4bebb87-c35b-4185-8c32-560d5ddc3664","Type":"ContainerStarted","Data":"c9da9fa853897df35c23910665d430955fbd2044d732f4824decf0dbad6d31b8"} Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.367846 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.369770 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e63493f9-ccd0-485d-a4ef-827699a7d1de","Type":"ContainerDied","Data":"b5e6958634b5ef82eaacbb26a4da255ff438c1c324f1fabdcdfea68eb3ecd76e"} Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.369909 4697 scope.go:117] 
"RemoveContainer" containerID="b4ea7f6164928dcfb77736cf4ffe05797bb915fa1f74cb4485c9421efe278def" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.369932 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.377065 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6887cfc8d4-v8f57" event={"ID":"dc00891c-0cae-42c0-bb0a-8e78bd146365","Type":"ContainerStarted","Data":"41730bf612d1b12077a746b67d7a91f4a81462e0cc7cdf86fe3d450b1e672c0a"} Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.377807 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.392844 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" podStartSLOduration=8.392827904 podStartE2EDuration="8.392827904s" podCreationTimestamp="2026-01-27 15:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:28.384952252 +0000 UTC m=+1264.557352033" watchObservedRunningTime="2026-01-27 15:29:28.392827904 +0000 UTC m=+1264.565227685" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.415214 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6887cfc8d4-v8f57" podStartSLOduration=7.415190077 podStartE2EDuration="7.415190077s" podCreationTimestamp="2026-01-27 15:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:28.415112545 +0000 UTC m=+1264.587512326" watchObservedRunningTime="2026-01-27 15:29:28.415190077 +0000 UTC m=+1264.587589858" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.432215 4697 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.457274 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.495527 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:29:28 crc kubenswrapper[4697]: E0127 15:29:28.496215 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63493f9-ccd0-485d-a4ef-827699a7d1de" containerName="glance-httpd" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.496309 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63493f9-ccd0-485d-a4ef-827699a7d1de" containerName="glance-httpd" Jan 27 15:29:28 crc kubenswrapper[4697]: E0127 15:29:28.496415 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63493f9-ccd0-485d-a4ef-827699a7d1de" containerName="glance-log" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.496490 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63493f9-ccd0-485d-a4ef-827699a7d1de" containerName="glance-log" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.496811 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63493f9-ccd0-485d-a4ef-827699a7d1de" containerName="glance-httpd" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.496917 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63493f9-ccd0-485d-a4ef-827699a7d1de" containerName="glance-log" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.498060 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.501859 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.502056 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.516623 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.582472 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63493f9-ccd0-485d-a4ef-827699a7d1de" path="/var/lib/kubelet/pods/e63493f9-ccd0-485d-a4ef-827699a7d1de/volumes" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.632396 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2f2\" (UniqueName: \"kubernetes.io/projected/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-kube-api-access-8b2f2\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.632447 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.632492 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.632512 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.632535 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.632562 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.632594 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-logs\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.632627 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" 
(UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.734678 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2f2\" (UniqueName: \"kubernetes.io/projected/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-kube-api-access-8b2f2\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.734737 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.734800 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.734822 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.734851 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " 
pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.734880 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.734914 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-logs\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.734952 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.735281 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.735511 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 
crc kubenswrapper[4697]: I0127 15:29:28.735647 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-logs\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.741471 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.757536 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.759195 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.759349 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.764147 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8b2f2\" (UniqueName: \"kubernetes.io/projected/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-kube-api-access-8b2f2\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.798486 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " pod="openstack/glance-default-external-api-0" Jan 27 15:29:28 crc kubenswrapper[4697]: I0127 15:29:28.820573 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:29:30 crc kubenswrapper[4697]: I0127 15:29:30.631139 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.391200 4697 scope.go:117] "RemoveContainer" containerID="427d1ce893cac9dade30d1e12137c9329eb36b513da688e2f404559ca58c9d87" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.586533 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.694639 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-combined-ca-bundle\") pod \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.694774 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-config-data\") pod \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.694954 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-logs\") pod \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.694997 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-scripts\") pod \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.701987 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.702051 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-httpd-run\") pod \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.702099 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsfc9\" (UniqueName: \"kubernetes.io/projected/2b36c8f6-7ed7-4100-9716-9e0d9914667c-kube-api-access-qsfc9\") pod \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\" (UID: \"2b36c8f6-7ed7-4100-9716-9e0d9914667c\") " Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.702420 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-logs" (OuterVolumeSpecName: "logs") pod "2b36c8f6-7ed7-4100-9716-9e0d9914667c" (UID: "2b36c8f6-7ed7-4100-9716-9e0d9914667c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.709725 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "2b36c8f6-7ed7-4100-9716-9e0d9914667c" (UID: "2b36c8f6-7ed7-4100-9716-9e0d9914667c"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.712171 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2b36c8f6-7ed7-4100-9716-9e0d9914667c" (UID: "2b36c8f6-7ed7-4100-9716-9e0d9914667c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.716693 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.716727 4697 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.716738 4697 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b36c8f6-7ed7-4100-9716-9e0d9914667c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.719536 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b36c8f6-7ed7-4100-9716-9e0d9914667c-kube-api-access-qsfc9" (OuterVolumeSpecName: "kube-api-access-qsfc9") pod "2b36c8f6-7ed7-4100-9716-9e0d9914667c" (UID: "2b36c8f6-7ed7-4100-9716-9e0d9914667c"). InnerVolumeSpecName "kube-api-access-qsfc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.746235 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-scripts" (OuterVolumeSpecName: "scripts") pod "2b36c8f6-7ed7-4100-9716-9e0d9914667c" (UID: "2b36c8f6-7ed7-4100-9716-9e0d9914667c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.781741 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b36c8f6-7ed7-4100-9716-9e0d9914667c" (UID: "2b36c8f6-7ed7-4100-9716-9e0d9914667c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.820703 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.820736 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsfc9\" (UniqueName: \"kubernetes.io/projected/2b36c8f6-7ed7-4100-9716-9e0d9914667c-kube-api-access-qsfc9\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.820752 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.847608 4697 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.921880 4697 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:31 crc kubenswrapper[4697]: I0127 15:29:31.928292 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-config-data" 
(OuterVolumeSpecName: "config-data") pod "2b36c8f6-7ed7-4100-9716-9e0d9914667c" (UID: "2b36c8f6-7ed7-4100-9716-9e0d9914667c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.023050 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b36c8f6-7ed7-4100-9716-9e0d9914667c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.134899 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.467658 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9dc56b78-cpxnx" event={"ID":"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4","Type":"ContainerStarted","Data":"94e5a0ea328ee095ebea3b739ec83ee42ff649968869720920ee234c3045166f"} Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.468020 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9dc56b78-cpxnx" event={"ID":"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4","Type":"ContainerStarted","Data":"2f19baab7216992a48276b699e53288ce7e5407977454691cf502bb2a615dd2c"} Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.477266 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6","Type":"ContainerStarted","Data":"a91ea5280664b3f6684f3d1af2971c6f9b18d7c75e10900d4d5f453ed001d33a"} Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.480405 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ccccf969c-jgqtz" event={"ID":"c83b7e83-b006-4f05-9f00-aa03173c05d9","Type":"ContainerStarted","Data":"fcd98db175548bf792a3dde4f85acce5344b265205cdc49b7c80a39ace32143d"} Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.480430 4697 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-5ccccf969c-jgqtz" event={"ID":"c83b7e83-b006-4f05-9f00-aa03173c05d9","Type":"ContainerStarted","Data":"d9a2cb0992d090f183121f1d4ff95b09feb8cb2dcaa7f40e67590cadc230cdde"} Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.481162 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.486954 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b36c8f6-7ed7-4100-9716-9e0d9914667c","Type":"ContainerDied","Data":"5e928e90885715afc384d4a938689d61e0426eb3f1b0905150d7bf6114270162"} Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.487000 4697 scope.go:117] "RemoveContainer" containerID="17b80c45ce1bf256d46d60ca3f1a308792d9c56d5b944a9628ab01d9f3c96d8f" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.487220 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.492965 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b9dc56b78-cpxnx" podStartSLOduration=-9223371984.361834 podStartE2EDuration="52.492942485s" podCreationTimestamp="2026-01-27 15:28:40 +0000 UTC" firstStartedPulling="2026-01-27 15:28:42.073001926 +0000 UTC m=+1218.245401707" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:32.486212742 +0000 UTC m=+1268.658612523" watchObservedRunningTime="2026-01-27 15:29:32.492942485 +0000 UTC m=+1268.665342266" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.519647 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5ccccf969c-jgqtz" podStartSLOduration=9.519626813 podStartE2EDuration="9.519626813s" podCreationTimestamp="2026-01-27 15:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:32.517662226 +0000 UTC m=+1268.690062017" watchObservedRunningTime="2026-01-27 15:29:32.519626813 +0000 UTC m=+1268.692026594" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.551210 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerStarted","Data":"b032d78bfb5d09a5bcc4cce4a5692183cdd0441c6395fbb0780d86701d8bd0b2"} Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.554034 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5c6j2" event={"ID":"09a835cc-5807-48ce-a9f8-354d3182603f","Type":"ContainerStarted","Data":"7a1be61c999c8362c2811e0f505dab5f61ce2764c439a9bc30b6b44d69387c3e"} Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.606807 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.606844 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.633869 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:29:32 crc kubenswrapper[4697]: E0127 15:29:32.634379 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerName="glance-httpd" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.634397 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerName="glance-httpd" Jan 27 15:29:32 crc kubenswrapper[4697]: E0127 15:29:32.634422 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerName="glance-log" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.634431 4697 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerName="glance-log" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.634672 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerName="glance-httpd" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.634718 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" containerName="glance-log" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.635943 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.639138 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.639342 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.649922 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-5c6j2" podStartSLOduration=3.2955472439999998 podStartE2EDuration="1m1.649902058s" podCreationTimestamp="2026-01-27 15:28:31 +0000 UTC" firstStartedPulling="2026-01-27 15:28:33.58246343 +0000 UTC m=+1209.754863211" lastFinishedPulling="2026-01-27 15:29:31.936818244 +0000 UTC m=+1268.109218025" observedRunningTime="2026-01-27 15:29:32.594329479 +0000 UTC m=+1268.766729260" watchObservedRunningTime="2026-01-27 15:29:32.649902058 +0000 UTC m=+1268.822301839" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.687348 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.737981 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.738036 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.738076 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.738327 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.738400 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prfc5\" (UniqueName: \"kubernetes.io/projected/7a2066c3-d242-4f3b-85bd-f407f06cded2-kube-api-access-prfc5\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.738431 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.738463 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.738487 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.789604 4697 scope.go:117] "RemoveContainer" containerID="1bde98f760df90ce10716d4485d501650d35a743ca166d9a215f3368f405bf82" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.843559 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.843611 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.843679 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.843708 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.843744 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.843825 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.843906 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prfc5\" (UniqueName: \"kubernetes.io/projected/7a2066c3-d242-4f3b-85bd-f407f06cded2-kube-api-access-prfc5\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 
crc kubenswrapper[4697]: I0127 15:29:32.843944 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.844704 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.846945 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.850237 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.855489 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.863086 4697 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.869365 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.872095 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prfc5\" (UniqueName: \"kubernetes.io/projected/7a2066c3-d242-4f3b-85bd-f407f06cded2-kube-api-access-prfc5\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.872977 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.906534 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:29:32 crc kubenswrapper[4697]: I0127 15:29:32.990273 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:33 crc kubenswrapper[4697]: I0127 15:29:33.614151 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n5g7m" event={"ID":"ba2a2abf-806a-4708-8f03-9e68c85c6c6c","Type":"ContainerStarted","Data":"cbfa7c85b9e2c7f5b3e7e417f0fd23351a97f5c2e8291eebc7a7e7770d3b08b2"} Jan 27 15:29:33 crc kubenswrapper[4697]: I0127 15:29:33.635916 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6","Type":"ContainerStarted","Data":"498ed7a6e35d21a4ad94e6d7396c394cf0a42cfc1834bbb7b8a985b27a7d0073"} Jan 27 15:29:33 crc kubenswrapper[4697]: I0127 15:29:33.659595 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-n5g7m" podStartSLOduration=4.5211168090000005 podStartE2EDuration="1m2.659570368s" podCreationTimestamp="2026-01-27 15:28:31 +0000 UTC" firstStartedPulling="2026-01-27 15:28:33.910151601 +0000 UTC m=+1210.082551382" lastFinishedPulling="2026-01-27 15:29:32.04860516 +0000 UTC m=+1268.221004941" observedRunningTime="2026-01-27 15:29:33.636205211 +0000 UTC m=+1269.808604992" watchObservedRunningTime="2026-01-27 15:29:33.659570368 +0000 UTC m=+1269.831970149" Jan 27 15:29:33 crc kubenswrapper[4697]: I0127 15:29:33.882431 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:29:33 crc kubenswrapper[4697]: W0127 15:29:33.927316 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a2066c3_d242_4f3b_85bd_f407f06cded2.slice/crio-92bcb75ce9206db107e7d6b1ed242749fb56a10d637090c778578a996a4e257e WatchSource:0}: Error finding container 92bcb75ce9206db107e7d6b1ed242749fb56a10d637090c778578a996a4e257e: Status 404 returned error can't find the container with id 
92bcb75ce9206db107e7d6b1ed242749fb56a10d637090c778578a996a4e257e Jan 27 15:29:34 crc kubenswrapper[4697]: I0127 15:29:34.582193 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b36c8f6-7ed7-4100-9716-9e0d9914667c" path="/var/lib/kubelet/pods/2b36c8f6-7ed7-4100-9716-9e0d9914667c/volumes" Jan 27 15:29:34 crc kubenswrapper[4697]: I0127 15:29:34.670367 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6","Type":"ContainerStarted","Data":"a58231e6612184f8a6d1d9111a036647b8f787183d1eb8701d7b6b23cbc57c25"} Jan 27 15:29:34 crc kubenswrapper[4697]: I0127 15:29:34.680450 4697 generic.go:334] "Generic (PLEG): container finished" podID="c11e83f3-61e4-4f13-89e2-cf9209760247" containerID="80dc09b6ac5700456b759d3d7ebde4333d93b1f0223b1146f3edfbff995bf507" exitCode=0 Jan 27 15:29:34 crc kubenswrapper[4697]: I0127 15:29:34.680501 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xc9hp" event={"ID":"c11e83f3-61e4-4f13-89e2-cf9209760247","Type":"ContainerDied","Data":"80dc09b6ac5700456b759d3d7ebde4333d93b1f0223b1146f3edfbff995bf507"} Jan 27 15:29:34 crc kubenswrapper[4697]: I0127 15:29:34.686917 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a2066c3-d242-4f3b-85bd-f407f06cded2","Type":"ContainerStarted","Data":"92bcb75ce9206db107e7d6b1ed242749fb56a10d637090c778578a996a4e257e"} Jan 27 15:29:34 crc kubenswrapper[4697]: I0127 15:29:34.708699 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.708678356 podStartE2EDuration="6.708678356s" podCreationTimestamp="2026-01-27 15:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:34.695741401 +0000 UTC m=+1270.868141182" 
watchObservedRunningTime="2026-01-27 15:29:34.708678356 +0000 UTC m=+1270.881078137" Jan 27 15:29:35 crc kubenswrapper[4697]: I0127 15:29:35.714761 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a2066c3-d242-4f3b-85bd-f407f06cded2","Type":"ContainerStarted","Data":"65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e"} Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.036935 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.141180 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-lnmrx"] Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.142846 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" podUID="98cd1413-1ae7-49dd-91b9-d30f7947c4ea" containerName="dnsmasq-dns" containerID="cri-o://3dfcc3b51443f5b0547b3b33bbcd5d7c3d4c777901c6ff9a71b37261257fd236" gracePeriod=10 Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.242169 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xc9hp" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.339514 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwtc9\" (UniqueName: \"kubernetes.io/projected/c11e83f3-61e4-4f13-89e2-cf9209760247-kube-api-access-pwtc9\") pod \"c11e83f3-61e4-4f13-89e2-cf9209760247\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.339606 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11e83f3-61e4-4f13-89e2-cf9209760247-logs\") pod \"c11e83f3-61e4-4f13-89e2-cf9209760247\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.339664 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-config-data\") pod \"c11e83f3-61e4-4f13-89e2-cf9209760247\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.339752 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-combined-ca-bundle\") pod \"c11e83f3-61e4-4f13-89e2-cf9209760247\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.339876 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-scripts\") pod \"c11e83f3-61e4-4f13-89e2-cf9209760247\" (UID: \"c11e83f3-61e4-4f13-89e2-cf9209760247\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.342331 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c11e83f3-61e4-4f13-89e2-cf9209760247-logs" (OuterVolumeSpecName: "logs") pod "c11e83f3-61e4-4f13-89e2-cf9209760247" (UID: "c11e83f3-61e4-4f13-89e2-cf9209760247"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.353052 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-scripts" (OuterVolumeSpecName: "scripts") pod "c11e83f3-61e4-4f13-89e2-cf9209760247" (UID: "c11e83f3-61e4-4f13-89e2-cf9209760247"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.441876 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.441904 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11e83f3-61e4-4f13-89e2-cf9209760247-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.472058 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11e83f3-61e4-4f13-89e2-cf9209760247-kube-api-access-pwtc9" (OuterVolumeSpecName: "kube-api-access-pwtc9") pod "c11e83f3-61e4-4f13-89e2-cf9209760247" (UID: "c11e83f3-61e4-4f13-89e2-cf9209760247"). InnerVolumeSpecName "kube-api-access-pwtc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.472178 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-config-data" (OuterVolumeSpecName: "config-data") pod "c11e83f3-61e4-4f13-89e2-cf9209760247" (UID: "c11e83f3-61e4-4f13-89e2-cf9209760247"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.485854 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c11e83f3-61e4-4f13-89e2-cf9209760247" (UID: "c11e83f3-61e4-4f13-89e2-cf9209760247"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.548928 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.548955 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwtc9\" (UniqueName: \"kubernetes.io/projected/c11e83f3-61e4-4f13-89e2-cf9209760247-kube-api-access-pwtc9\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.548966 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11e83f3-61e4-4f13-89e2-cf9209760247-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.775060 4697 generic.go:334] "Generic (PLEG): container finished" podID="98cd1413-1ae7-49dd-91b9-d30f7947c4ea" containerID="3dfcc3b51443f5b0547b3b33bbcd5d7c3d4c777901c6ff9a71b37261257fd236" exitCode=0 Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.775443 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" event={"ID":"98cd1413-1ae7-49dd-91b9-d30f7947c4ea","Type":"ContainerDied","Data":"3dfcc3b51443f5b0547b3b33bbcd5d7c3d4c777901c6ff9a71b37261257fd236"} Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.778152 4697 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xc9hp" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.778148 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xc9hp" event={"ID":"c11e83f3-61e4-4f13-89e2-cf9209760247","Type":"ContainerDied","Data":"8eda167125e7fa07dcab714d9553f94345c90fbfb19beb504cf2a2e4ea07bee6"} Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.778194 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eda167125e7fa07dcab714d9553f94345c90fbfb19beb504cf2a2e4ea07bee6" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.803176 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a2066c3-d242-4f3b-85bd-f407f06cded2","Type":"ContainerStarted","Data":"1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93"} Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.836908 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.854355 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.854329364 podStartE2EDuration="4.854329364s" podCreationTimestamp="2026-01-27 15:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:36.843617665 +0000 UTC m=+1273.016017446" watchObservedRunningTime="2026-01-27 15:29:36.854329364 +0000 UTC m=+1273.026729145" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.922019 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85b9bd5db8-9x55q"] Jan 27 15:29:36 crc kubenswrapper[4697]: E0127 15:29:36.929585 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98cd1413-1ae7-49dd-91b9-d30f7947c4ea" containerName="dnsmasq-dns" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.929662 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="98cd1413-1ae7-49dd-91b9-d30f7947c4ea" containerName="dnsmasq-dns" Jan 27 15:29:36 crc kubenswrapper[4697]: E0127 15:29:36.929736 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11e83f3-61e4-4f13-89e2-cf9209760247" containerName="placement-db-sync" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.929809 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11e83f3-61e4-4f13-89e2-cf9209760247" containerName="placement-db-sync" Jan 27 15:29:36 crc kubenswrapper[4697]: E0127 15:29:36.929887 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98cd1413-1ae7-49dd-91b9-d30f7947c4ea" containerName="init" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.929939 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="98cd1413-1ae7-49dd-91b9-d30f7947c4ea" containerName="init" Jan 27 15:29:36 crc 
kubenswrapper[4697]: I0127 15:29:36.930181 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11e83f3-61e4-4f13-89e2-cf9209760247" containerName="placement-db-sync" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.930276 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="98cd1413-1ae7-49dd-91b9-d30f7947c4ea" containerName="dnsmasq-dns" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.931226 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.934961 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-999wh" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.935157 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.935279 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.935546 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85b9bd5db8-9x55q"] Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.938129 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.938885 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.975110 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-nb\") pod \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.975252 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-config\") pod \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.975307 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-swift-storage-0\") pod \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.975383 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-sb\") pod \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.975418 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghclv\" (UniqueName: \"kubernetes.io/projected/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-kube-api-access-ghclv\") pod \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.975528 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-svc\") pod \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\" (UID: \"98cd1413-1ae7-49dd-91b9-d30f7947c4ea\") " Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.975920 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5663a40f-33b6-4e0b-9f94-94aecd69e3af-logs\") pod \"placement-85b9bd5db8-9x55q\" (UID: 
\"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.976028 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-config-data\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.976081 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcqfs\" (UniqueName: \"kubernetes.io/projected/5663a40f-33b6-4e0b-9f94-94aecd69e3af-kube-api-access-qcqfs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.976107 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-public-tls-certs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.976145 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-combined-ca-bundle\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.976184 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-scripts\") pod 
\"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.976288 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-internal-tls-certs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:36 crc kubenswrapper[4697]: I0127 15:29:36.989996 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-kube-api-access-ghclv" (OuterVolumeSpecName: "kube-api-access-ghclv") pod "98cd1413-1ae7-49dd-91b9-d30f7947c4ea" (UID: "98cd1413-1ae7-49dd-91b9-d30f7947c4ea"). InnerVolumeSpecName "kube-api-access-ghclv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.049511 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-config" (OuterVolumeSpecName: "config") pod "98cd1413-1ae7-49dd-91b9-d30f7947c4ea" (UID: "98cd1413-1ae7-49dd-91b9-d30f7947c4ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.059333 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98cd1413-1ae7-49dd-91b9-d30f7947c4ea" (UID: "98cd1413-1ae7-49dd-91b9-d30f7947c4ea"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.064322 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98cd1413-1ae7-49dd-91b9-d30f7947c4ea" (UID: "98cd1413-1ae7-49dd-91b9-d30f7947c4ea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.079780 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-internal-tls-certs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.080143 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5663a40f-33b6-4e0b-9f94-94aecd69e3af-logs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.080266 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-config-data\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.080382 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcqfs\" (UniqueName: \"kubernetes.io/projected/5663a40f-33b6-4e0b-9f94-94aecd69e3af-kube-api-access-qcqfs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " 
pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.080468 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-public-tls-certs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.080557 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-combined-ca-bundle\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.080645 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-scripts\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.080854 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.080944 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.081027 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghclv\" (UniqueName: \"kubernetes.io/projected/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-kube-api-access-ghclv\") on node \"crc\" 
DevicePath \"\"" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.081105 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.080583 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5663a40f-33b6-4e0b-9f94-94aecd69e3af-logs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.101082 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-internal-tls-certs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.103008 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-combined-ca-bundle\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.103610 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-scripts\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.115515 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-public-tls-certs\") 
pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.116074 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5663a40f-33b6-4e0b-9f94-94aecd69e3af-config-data\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.144606 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcqfs\" (UniqueName: \"kubernetes.io/projected/5663a40f-33b6-4e0b-9f94-94aecd69e3af-kube-api-access-qcqfs\") pod \"placement-85b9bd5db8-9x55q\" (UID: \"5663a40f-33b6-4e0b-9f94-94aecd69e3af\") " pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.148971 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98cd1413-1ae7-49dd-91b9-d30f7947c4ea" (UID: "98cd1413-1ae7-49dd-91b9-d30f7947c4ea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.157509 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98cd1413-1ae7-49dd-91b9-d30f7947c4ea" (UID: "98cd1413-1ae7-49dd-91b9-d30f7947c4ea"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.183133 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.183176 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98cd1413-1ae7-49dd-91b9-d30f7947c4ea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.260824 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.817200 4697 generic.go:334] "Generic (PLEG): container finished" podID="edc034d6-13db-4ae2-be4c-86e4dad22dc7" containerID="3b3e5994844f75460670ebf4405acc857c69bcbf2ea85d491c81da5c3d0a7c4f" exitCode=0 Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.817263 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pdrgr" event={"ID":"edc034d6-13db-4ae2-be4c-86e4dad22dc7","Type":"ContainerDied","Data":"3b3e5994844f75460670ebf4405acc857c69bcbf2ea85d491c81da5c3d0a7c4f"} Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.817842 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85b9bd5db8-9x55q"] Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.822933 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" event={"ID":"98cd1413-1ae7-49dd-91b9-d30f7947c4ea","Type":"ContainerDied","Data":"371c6ec92a973351219930774e641200efaa1ca582388d332a524b6148494c90"} Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.822987 4697 scope.go:117] "RemoveContainer" containerID="3dfcc3b51443f5b0547b3b33bbcd5d7c3d4c777901c6ff9a71b37261257fd236" Jan 27 15:29:37 crc 
kubenswrapper[4697]: I0127 15:29:37.823016 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-lnmrx" Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.869495 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-lnmrx"] Jan 27 15:29:37 crc kubenswrapper[4697]: I0127 15:29:37.889386 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-lnmrx"] Jan 27 15:29:38 crc kubenswrapper[4697]: I0127 15:29:38.586427 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98cd1413-1ae7-49dd-91b9-d30f7947c4ea" path="/var/lib/kubelet/pods/98cd1413-1ae7-49dd-91b9-d30f7947c4ea/volumes" Jan 27 15:29:38 crc kubenswrapper[4697]: I0127 15:29:38.821202 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 15:29:38 crc kubenswrapper[4697]: I0127 15:29:38.821244 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 15:29:38 crc kubenswrapper[4697]: I0127 15:29:38.880147 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 15:29:38 crc kubenswrapper[4697]: I0127 15:29:38.880761 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 15:29:38 crc kubenswrapper[4697]: I0127 15:29:38.880844 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 15:29:39 crc kubenswrapper[4697]: I0127 15:29:39.842982 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 15:29:40 crc kubenswrapper[4697]: I0127 15:29:40.629715 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5965fc65fb-dvhzz" 
podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 27 15:29:40 crc kubenswrapper[4697]: I0127 15:29:40.918725 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:29:40 crc kubenswrapper[4697]: I0127 15:29:40.919556 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:29:42 crc kubenswrapper[4697]: I0127 15:29:42.974586 4697 scope.go:117] "RemoveContainer" containerID="4e200fb4cff5eba7c81253d76dd7f2d682567abe1ed975e856f6bac846a345ca" Jan 27 15:29:42 crc kubenswrapper[4697]: I0127 15:29:42.991246 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:42 crc kubenswrapper[4697]: I0127 15:29:42.991502 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.056235 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.066592 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.093346 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.196301 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-fernet-keys\") pod \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.196375 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-credential-keys\") pod \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.196438 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-combined-ca-bundle\") pod \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.196474 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-config-data\") pod \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.196513 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q8v8\" (UniqueName: \"kubernetes.io/projected/edc034d6-13db-4ae2-be4c-86e4dad22dc7-kube-api-access-4q8v8\") pod \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.196614 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-scripts\") pod \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\" (UID: \"edc034d6-13db-4ae2-be4c-86e4dad22dc7\") " Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.203456 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-scripts" (OuterVolumeSpecName: "scripts") pod "edc034d6-13db-4ae2-be4c-86e4dad22dc7" (UID: "edc034d6-13db-4ae2-be4c-86e4dad22dc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.222422 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "edc034d6-13db-4ae2-be4c-86e4dad22dc7" (UID: "edc034d6-13db-4ae2-be4c-86e4dad22dc7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.222460 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "edc034d6-13db-4ae2-be4c-86e4dad22dc7" (UID: "edc034d6-13db-4ae2-be4c-86e4dad22dc7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.233407 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc034d6-13db-4ae2-be4c-86e4dad22dc7-kube-api-access-4q8v8" (OuterVolumeSpecName: "kube-api-access-4q8v8") pod "edc034d6-13db-4ae2-be4c-86e4dad22dc7" (UID: "edc034d6-13db-4ae2-be4c-86e4dad22dc7"). InnerVolumeSpecName "kube-api-access-4q8v8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.234031 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edc034d6-13db-4ae2-be4c-86e4dad22dc7" (UID: "edc034d6-13db-4ae2-be4c-86e4dad22dc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.235892 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-config-data" (OuterVolumeSpecName: "config-data") pod "edc034d6-13db-4ae2-be4c-86e4dad22dc7" (UID: "edc034d6-13db-4ae2-be4c-86e4dad22dc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.298460 4697 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.298625 4697 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.298718 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.298828 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 
15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.298922 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q8v8\" (UniqueName: \"kubernetes.io/projected/edc034d6-13db-4ae2-be4c-86e4dad22dc7-kube-api-access-4q8v8\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.299008 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc034d6-13db-4ae2-be4c-86e4dad22dc7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.885815 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85b9bd5db8-9x55q" event={"ID":"5663a40f-33b6-4e0b-9f94-94aecd69e3af","Type":"ContainerStarted","Data":"4f140fd286bdfdc08c41a8bf72e9b2098a27acadedf81aea681c406aebb0f516"} Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.886304 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.886319 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.886328 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85b9bd5db8-9x55q" event={"ID":"5663a40f-33b6-4e0b-9f94-94aecd69e3af","Type":"ContainerStarted","Data":"0825df35e0e36891935dca4a29ee6cf07f3510bf101a2c51d0d9b651f704d99f"} Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.886339 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85b9bd5db8-9x55q" event={"ID":"5663a40f-33b6-4e0b-9f94-94aecd69e3af","Type":"ContainerStarted","Data":"31b015ad183bf575dc03ef2600835a87243df667e85e35b76b39024439ff536e"} Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.887971 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pdrgr" 
event={"ID":"edc034d6-13db-4ae2-be4c-86e4dad22dc7","Type":"ContainerDied","Data":"11691313ad63b1f68626dd2978bcafb6f25ce99f09bc3e50982693d2d53fbc91"} Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.888023 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11691313ad63b1f68626dd2978bcafb6f25ce99f09bc3e50982693d2d53fbc91" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.887995 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pdrgr" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.894296 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerStarted","Data":"f5f8232461a2a3788177d3ee719ad49bae6cb338cba5dd1441ba3675ebf46fe7"} Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.894352 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.894545 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:43 crc kubenswrapper[4697]: I0127 15:29:43.916320 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85b9bd5db8-9x55q" podStartSLOduration=7.916302743 podStartE2EDuration="7.916302743s" podCreationTimestamp="2026-01-27 15:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:43.914457649 +0000 UTC m=+1280.086857450" watchObservedRunningTime="2026-01-27 15:29:43.916302743 +0000 UTC m=+1280.088702514" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.306656 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-59df5b454d-5c7dx"] Jan 27 15:29:44 crc kubenswrapper[4697]: E0127 15:29:44.307309 
4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc034d6-13db-4ae2-be4c-86e4dad22dc7" containerName="keystone-bootstrap" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.307324 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc034d6-13db-4ae2-be4c-86e4dad22dc7" containerName="keystone-bootstrap" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.307501 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc034d6-13db-4ae2-be4c-86e4dad22dc7" containerName="keystone-bootstrap" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.308126 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.312937 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.313103 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.313214 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hr2gd" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.313425 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.314877 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59df5b454d-5c7dx"] Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.324428 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.324964 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.416521 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-config-data\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.416617 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-fernet-keys\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.416654 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-credential-keys\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.416689 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqlz\" (UniqueName: \"kubernetes.io/projected/d505d1b9-c72c-4515-8f3f-f543d0276487-kube-api-access-nhqlz\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.416716 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-public-tls-certs\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.416745 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-internal-tls-certs\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.416829 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-scripts\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.416890 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-combined-ca-bundle\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.518767 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-public-tls-certs\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.518849 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-internal-tls-certs\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.518880 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-scripts\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.518926 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-combined-ca-bundle\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.518972 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-config-data\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.519011 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-fernet-keys\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.519036 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-credential-keys\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.519059 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhqlz\" (UniqueName: 
\"kubernetes.io/projected/d505d1b9-c72c-4515-8f3f-f543d0276487-kube-api-access-nhqlz\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.541718 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-config-data\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.548449 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-combined-ca-bundle\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.549518 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhqlz\" (UniqueName: \"kubernetes.io/projected/d505d1b9-c72c-4515-8f3f-f543d0276487-kube-api-access-nhqlz\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.555302 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-credential-keys\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.556020 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-scripts\") pod \"keystone-59df5b454d-5c7dx\" (UID: 
\"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.556242 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-internal-tls-certs\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.556365 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-public-tls-certs\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.561246 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d505d1b9-c72c-4515-8f3f-f543d0276487-fernet-keys\") pod \"keystone-59df5b454d-5c7dx\" (UID: \"d505d1b9-c72c-4515-8f3f-f543d0276487\") " pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:44 crc kubenswrapper[4697]: I0127 15:29:44.634937 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:45 crc kubenswrapper[4697]: I0127 15:29:45.277729 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59df5b454d-5c7dx"] Jan 27 15:29:45 crc kubenswrapper[4697]: W0127 15:29:45.289432 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd505d1b9_c72c_4515_8f3f_f543d0276487.slice/crio-98bce31eb548ef8d3bb310f07c2606ba0abea6422bd1f379c3e9758ab4bf4752 WatchSource:0}: Error finding container 98bce31eb548ef8d3bb310f07c2606ba0abea6422bd1f379c3e9758ab4bf4752: Status 404 returned error can't find the container with id 98bce31eb548ef8d3bb310f07c2606ba0abea6422bd1f379c3e9758ab4bf4752 Jan 27 15:29:45 crc kubenswrapper[4697]: I0127 15:29:45.927328 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:29:45 crc kubenswrapper[4697]: I0127 15:29:45.927633 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:29:45 crc kubenswrapper[4697]: I0127 15:29:45.927623 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-59df5b454d-5c7dx" event={"ID":"d505d1b9-c72c-4515-8f3f-f543d0276487","Type":"ContainerStarted","Data":"4f1d28dabe58226ee2134a981cffd80f06be52abb645b50e79948534c306f81a"} Jan 27 15:29:45 crc kubenswrapper[4697]: I0127 15:29:45.927678 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-59df5b454d-5c7dx" event={"ID":"d505d1b9-c72c-4515-8f3f-f543d0276487","Type":"ContainerStarted","Data":"98bce31eb548ef8d3bb310f07c2606ba0abea6422bd1f379c3e9758ab4bf4752"} Jan 27 15:29:46 crc kubenswrapper[4697]: I0127 15:29:46.934630 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:29:46 crc kubenswrapper[4697]: I0127 15:29:46.957127 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-59df5b454d-5c7dx" podStartSLOduration=2.9571073390000002 podStartE2EDuration="2.957107339s" podCreationTimestamp="2026-01-27 15:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:29:46.95469158 +0000 UTC m=+1283.127091361" watchObservedRunningTime="2026-01-27 15:29:46.957107339 +0000 UTC m=+1283.129507120" Jan 27 15:29:50 crc kubenswrapper[4697]: I0127 15:29:50.628985 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 27 15:29:50 crc kubenswrapper[4697]: I0127 15:29:50.629828 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:29:50 crc kubenswrapper[4697]: I0127 15:29:50.630967 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"e54450188c94f7298427d91a62c88df853535928735655dd6ef49dea887a8a99"} pod="openstack/horizon-5965fc65fb-dvhzz" containerMessage="Container horizon failed startup probe, will be restarted" Jan 27 15:29:50 crc kubenswrapper[4697]: I0127 15:29:50.631019 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" containerID="cri-o://e54450188c94f7298427d91a62c88df853535928735655dd6ef49dea887a8a99" gracePeriod=30 Jan 27 15:29:50 crc kubenswrapper[4697]: I0127 15:29:50.922095 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 15:29:51 crc kubenswrapper[4697]: I0127 15:29:51.987825 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6887cfc8d4-v8f57" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 15:29:51 crc kubenswrapper[4697]: I0127 15:29:51.988249 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6887cfc8d4-v8f57" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 15:29:51 crc kubenswrapper[4697]: I0127 15:29:51.989067 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6887cfc8d4-v8f57" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 15:29:55 crc kubenswrapper[4697]: I0127 15:29:55.020569 4697 generic.go:334] "Generic (PLEG): container finished" podID="09a835cc-5807-48ce-a9f8-354d3182603f" containerID="7a1be61c999c8362c2811e0f505dab5f61ce2764c439a9bc30b6b44d69387c3e" exitCode=0 Jan 27 15:29:55 crc kubenswrapper[4697]: I0127 15:29:55.020638 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5c6j2" event={"ID":"09a835cc-5807-48ce-a9f8-354d3182603f","Type":"ContainerDied","Data":"7a1be61c999c8362c2811e0f505dab5f61ce2764c439a9bc30b6b44d69387c3e"} Jan 27 15:29:55 crc kubenswrapper[4697]: I0127 15:29:55.108585 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:29:55 crc 
kubenswrapper[4697]: I0127 15:29:55.108658 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:29:56 crc kubenswrapper[4697]: I0127 15:29:56.841056 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:56 crc kubenswrapper[4697]: I0127 15:29:56.841916 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:29:56 crc kubenswrapper[4697]: I0127 15:29:56.848659 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 15:29:56 crc kubenswrapper[4697]: I0127 15:29:56.867888 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 15:29:56 crc kubenswrapper[4697]: I0127 15:29:56.868006 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:29:56 crc kubenswrapper[4697]: I0127 15:29:56.901921 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 15:29:57 crc kubenswrapper[4697]: I0127 15:29:57.922486 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.044500 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5c6j2" event={"ID":"09a835cc-5807-48ce-a9f8-354d3182603f","Type":"ContainerDied","Data":"5105b6a0f7dadb3760c94a88c4372981669e160e61a7e82b3bb341cb466c55c2"} Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.044537 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5105b6a0f7dadb3760c94a88c4372981669e160e61a7e82b3bb341cb466c55c2" Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.044588 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5c6j2" Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.075692 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgvx9\" (UniqueName: \"kubernetes.io/projected/09a835cc-5807-48ce-a9f8-354d3182603f-kube-api-access-hgvx9\") pod \"09a835cc-5807-48ce-a9f8-354d3182603f\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.075893 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-db-sync-config-data\") pod \"09a835cc-5807-48ce-a9f8-354d3182603f\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.075992 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-combined-ca-bundle\") pod \"09a835cc-5807-48ce-a9f8-354d3182603f\" (UID: \"09a835cc-5807-48ce-a9f8-354d3182603f\") " Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.081965 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/09a835cc-5807-48ce-a9f8-354d3182603f-kube-api-access-hgvx9" (OuterVolumeSpecName: "kube-api-access-hgvx9") pod "09a835cc-5807-48ce-a9f8-354d3182603f" (UID: "09a835cc-5807-48ce-a9f8-354d3182603f"). InnerVolumeSpecName "kube-api-access-hgvx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.094479 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "09a835cc-5807-48ce-a9f8-354d3182603f" (UID: "09a835cc-5807-48ce-a9f8-354d3182603f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.129811 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09a835cc-5807-48ce-a9f8-354d3182603f" (UID: "09a835cc-5807-48ce-a9f8-354d3182603f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.178026 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.178062 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgvx9\" (UniqueName: \"kubernetes.io/projected/09a835cc-5807-48ce-a9f8-354d3182603f-kube-api-access-hgvx9\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:58 crc kubenswrapper[4697]: I0127 15:29:58.178074 4697 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09a835cc-5807-48ce-a9f8-354d3182603f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.317180 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8c986997f-97nkx"] Jan 27 15:29:59 crc kubenswrapper[4697]: E0127 15:29:59.317851 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a835cc-5807-48ce-a9f8-354d3182603f" containerName="barbican-db-sync" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.317866 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a835cc-5807-48ce-a9f8-354d3182603f" containerName="barbican-db-sync" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.318088 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a835cc-5807-48ce-a9f8-354d3182603f" containerName="barbican-db-sync" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.318968 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.324351 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.324566 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4sphp" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.324740 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.348435 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8c986997f-97nkx"] Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.422384 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b5997ff6-w2vq4"] Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.423828 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.425392 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c283033b-665a-4e84-b347-5ab724df37be-config-data\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.425445 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c283033b-665a-4e84-b347-5ab724df37be-config-data-custom\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.425476 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztk64\" (UniqueName: \"kubernetes.io/projected/c283033b-665a-4e84-b347-5ab724df37be-kube-api-access-ztk64\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.425492 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c283033b-665a-4e84-b347-5ab724df37be-combined-ca-bundle\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.425525 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c283033b-665a-4e84-b347-5ab724df37be-logs\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.430058 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.457013 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-72dd2"] Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.458445 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.483550 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b5997ff6-w2vq4"] Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.502674 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-72dd2"] Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529368 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-config-data\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529419 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6rxx\" (UniqueName: \"kubernetes.io/projected/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-kube-api-access-m6rxx\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: 
I0127 15:29:59.529441 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-combined-ca-bundle\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529470 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529494 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529519 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-config\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529537 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " 
pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529563 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-logs\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529586 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c283033b-665a-4e84-b347-5ab724df37be-config-data\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529629 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c283033b-665a-4e84-b347-5ab724df37be-config-data-custom\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529645 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529674 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztk64\" (UniqueName: \"kubernetes.io/projected/c283033b-665a-4e84-b347-5ab724df37be-kube-api-access-ztk64\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") 
" pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529693 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c283033b-665a-4e84-b347-5ab724df37be-combined-ca-bundle\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529708 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64v7\" (UniqueName: \"kubernetes.io/projected/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-kube-api-access-f64v7\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529738 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-config-data-custom\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.529764 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c283033b-665a-4e84-b347-5ab724df37be-logs\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.530176 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c283033b-665a-4e84-b347-5ab724df37be-logs\") pod \"barbican-worker-8c986997f-97nkx\" (UID: 
\"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.540634 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c283033b-665a-4e84-b347-5ab724df37be-config-data-custom\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.541179 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c283033b-665a-4e84-b347-5ab724df37be-combined-ca-bundle\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.561393 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c283033b-665a-4e84-b347-5ab724df37be-config-data\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.592336 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztk64\" (UniqueName: \"kubernetes.io/projected/c283033b-665a-4e84-b347-5ab724df37be-kube-api-access-ztk64\") pod \"barbican-worker-8c986997f-97nkx\" (UID: \"c283033b-665a-4e84-b347-5ab724df37be\") " pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.631983 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " 
pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632036 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-logs\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632080 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632110 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64v7\" (UniqueName: \"kubernetes.io/projected/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-kube-api-access-f64v7\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632135 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-config-data-custom\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632171 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-config-data\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " 
pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632203 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6rxx\" (UniqueName: \"kubernetes.io/projected/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-kube-api-access-m6rxx\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632222 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-combined-ca-bundle\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632249 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632271 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.632294 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-config\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: 
\"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.633214 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-config\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.633767 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.634122 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-logs\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.637589 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-combined-ca-bundle\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.638341 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " 
pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.638849 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.639231 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.641186 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-config-data\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.648807 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-config-data-custom\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.656613 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b4dc8dd8d-w99s5"] Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.661561 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f64v7\" (UniqueName: 
\"kubernetes.io/projected/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-kube-api-access-f64v7\") pod \"dnsmasq-dns-85ff748b95-72dd2\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.662479 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6rxx\" (UniqueName: \"kubernetes.io/projected/4ab6ee6b-e923-4905-8d8d-56f96e3bd471-kube-api-access-m6rxx\") pod \"barbican-keystone-listener-6b5997ff6-w2vq4\" (UID: \"4ab6ee6b-e923-4905-8d8d-56f96e3bd471\") " pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.673229 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b4dc8dd8d-w99s5"] Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.673393 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.676365 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.676385 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8c986997f-97nkx" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.737739 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.737811 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5924n\" (UniqueName: \"kubernetes.io/projected/001c4d9a-f883-48ed-aafa-9b820b5b9380-kube-api-access-5924n\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.737871 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data-custom\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.737921 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001c4d9a-f883-48ed-aafa-9b820b5b9380-logs\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.738227 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-combined-ca-bundle\") pod 
\"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.819221 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.842873 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data-custom\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.842962 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001c4d9a-f883-48ed-aafa-9b820b5b9380-logs\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.842990 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-combined-ca-bundle\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.843086 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.843128 4697 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5924n\" (UniqueName: \"kubernetes.io/projected/001c4d9a-f883-48ed-aafa-9b820b5b9380-kube-api-access-5924n\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.848244 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001c4d9a-f883-48ed-aafa-9b820b5b9380-logs\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.852539 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data-custom\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.852579 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-combined-ca-bundle\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.858019 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.865459 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5924n\" (UniqueName: 
\"kubernetes.io/projected/001c4d9a-f883-48ed-aafa-9b820b5b9380-kube-api-access-5924n\") pod \"barbican-api-6b4dc8dd8d-w99s5\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:29:59 crc kubenswrapper[4697]: I0127 15:29:59.923047 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.024833 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.120094 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerStarted","Data":"fdfb3301b52faa56ab862269be9076c39667ffde1d926c21799f9da554b86682"} Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.123946 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.121275 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="proxy-httpd" containerID="cri-o://fdfb3301b52faa56ab862269be9076c39667ffde1d926c21799f9da554b86682" gracePeriod=30 Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.120990 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="ceilometer-central-agent" containerID="cri-o://023d0b7bbc4282457b9c42149fe43bd96f28c8d5b0006f9f50340bf622320c7a" gracePeriod=30 Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.121307 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70da0843-011d-422d-bc59-479d90e689a8" 
containerName="ceilometer-notification-agent" containerID="cri-o://b032d78bfb5d09a5bcc4cce4a5692183cdd0441c6395fbb0780d86701d8bd0b2" gracePeriod=30 Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.121293 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="sg-core" containerID="cri-o://f5f8232461a2a3788177d3ee719ad49bae6cb338cba5dd1441ba3675ebf46fe7" gracePeriod=30 Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.170873 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt"] Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.172509 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.183651 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt"] Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.187982 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.188456 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.196532 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.454482704 podStartE2EDuration="1m29.196513247s" podCreationTimestamp="2026-01-27 15:28:31 +0000 UTC" firstStartedPulling="2026-01-27 15:28:33.41906821 +0000 UTC m=+1209.591467981" lastFinishedPulling="2026-01-27 15:29:59.161098733 +0000 UTC m=+1295.333498524" observedRunningTime="2026-01-27 15:30:00.159339155 +0000 UTC 
m=+1296.331738936" watchObservedRunningTime="2026-01-27 15:30:00.196513247 +0000 UTC m=+1296.368913028" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.279817 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-config-volume\") pod \"collect-profiles-29492130-v7pxt\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.279991 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-secret-volume\") pod \"collect-profiles-29492130-v7pxt\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.300053 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm7fz\" (UniqueName: \"kubernetes.io/projected/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-kube-api-access-hm7fz\") pod \"collect-profiles-29492130-v7pxt\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.403311 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-config-volume\") pod \"collect-profiles-29492130-v7pxt\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.403413 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-secret-volume\") pod \"collect-profiles-29492130-v7pxt\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.403531 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm7fz\" (UniqueName: \"kubernetes.io/projected/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-kube-api-access-hm7fz\") pod \"collect-profiles-29492130-v7pxt\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.405311 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-config-volume\") pod \"collect-profiles-29492130-v7pxt\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.429321 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm7fz\" (UniqueName: \"kubernetes.io/projected/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-kube-api-access-hm7fz\") pod \"collect-profiles-29492130-v7pxt\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.443672 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-secret-volume\") pod \"collect-profiles-29492130-v7pxt\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: 
I0127 15:30:00.529460 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.640030 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.761214 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8c986997f-97nkx"] Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.888874 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6887cfc8d4-v8f57"] Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.889473 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6887cfc8d4-v8f57" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-api" containerID="cri-o://2a16194145ec6654978b540e58e66ba2b45b349503991b45948adac1968da332" gracePeriod=30 Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.890297 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6887cfc8d4-v8f57" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-httpd" containerID="cri-o://41730bf612d1b12077a746b67d7a91f4a81462e0cc7cdf86fe3d450b1e672c0a" gracePeriod=30 Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.903800 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.915176 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b5997ff6-w2vq4"] Jan 27 15:30:00 crc kubenswrapper[4697]: I0127 15:30:00.926830 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.001847 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dff8b9f65-4b4q2"] Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.016972 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dff8b9f65-4b4q2" Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.020497 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dff8b9f65-4b4q2"] Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.046350 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-72dd2"] Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.138911 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8c986997f-97nkx" event={"ID":"c283033b-665a-4e84-b347-5ab724df37be","Type":"ContainerStarted","Data":"299b2482a42658560cefef26aa162b5bd2607341ddb0732f3df6a4bd3ea8518d"} Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.140335 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" event={"ID":"eeb8f1bf-1cfd-43f2-83ec-b322061636f4","Type":"ContainerStarted","Data":"6de3994444687fe2f9ca9cc271a1a86c1a186b094847c2a0bf93998f3b34d520"} Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.141105 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" event={"ID":"4ab6ee6b-e923-4905-8d8d-56f96e3bd471","Type":"ContainerStarted","Data":"93ec8f85caa82aaca2754b37068f752aeddec82854716d7f594225ab1d641871"} Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.142935 4697 generic.go:334] "Generic (PLEG): container finished" podID="70da0843-011d-422d-bc59-479d90e689a8" 
containerID="f5f8232461a2a3788177d3ee719ad49bae6cb338cba5dd1441ba3675ebf46fe7" exitCode=2 Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.142961 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerDied","Data":"f5f8232461a2a3788177d3ee719ad49bae6cb338cba5dd1441ba3675ebf46fe7"} Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.178508 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzk2c\" (UniqueName: \"kubernetes.io/projected/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-kube-api-access-qzk2c\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2" Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.178571 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-ovndb-tls-certs\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2" Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.178621 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-internal-tls-certs\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2" Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.178663 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-combined-ca-bundle\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " 
pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.178687 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-httpd-config\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.181152 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-config\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.181188 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-public-tls-certs\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.241303 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b4dc8dd8d-w99s5"]
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.282844 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-combined-ca-bundle\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.282891 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-httpd-config\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.282984 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-config\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.283002 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-public-tls-certs\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.283050 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzk2c\" (UniqueName: \"kubernetes.io/projected/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-kube-api-access-qzk2c\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.283091 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-ovndb-tls-certs\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.283122 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-internal-tls-certs\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.301565 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-combined-ca-bundle\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.302108 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-ovndb-tls-certs\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.302680 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-internal-tls-certs\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.304834 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-config\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.308445 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzk2c\" (UniqueName: \"kubernetes.io/projected/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-kube-api-access-qzk2c\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.312989 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-public-tls-certs\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.315639 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f67513b8-77d5-4a24-b1ee-ce73e70cb72d-httpd-config\") pod \"neutron-dff8b9f65-4b4q2\" (UID: \"f67513b8-77d5-4a24-b1ee-ce73e70cb72d\") " pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.347236 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:01 crc kubenswrapper[4697]: I0127 15:30:01.425845 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt"]
Jan 27 15:30:01 crc kubenswrapper[4697]: W0127 15:30:01.498970 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4c4714f_1111_4d49_88a7_e1ac4dfa70b6.slice/crio-f6d65dcb8aadd847e861c19965b4943405b17070f9740d328b80a840a25793b9 WatchSource:0}: Error finding container f6d65dcb8aadd847e861c19965b4943405b17070f9740d328b80a840a25793b9: Status 404 returned error can't find the container with id f6d65dcb8aadd847e861c19965b4943405b17070f9740d328b80a840a25793b9
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.132409 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dff8b9f65-4b4q2"]
Jan 27 15:30:02 crc kubenswrapper[4697]: W0127 15:30:02.138055 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf67513b8_77d5_4a24_b1ee_ce73e70cb72d.slice/crio-5bb36edd027161e666946273961372a57fcd7bedbfc0f0ceecf8db325fcb4c92 WatchSource:0}: Error finding container 5bb36edd027161e666946273961372a57fcd7bedbfc0f0ceecf8db325fcb4c92: Status 404 returned error can't find the container with id 5bb36edd027161e666946273961372a57fcd7bedbfc0f0ceecf8db325fcb4c92
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.176624 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" event={"ID":"001c4d9a-f883-48ed-aafa-9b820b5b9380","Type":"ContainerStarted","Data":"91b87d86adc1c0b52193f2a6f5c7b0dee6f084e2b7f4e6c72564bfb2aed13025"}
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.176994 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" event={"ID":"001c4d9a-f883-48ed-aafa-9b820b5b9380","Type":"ContainerStarted","Data":"aebb4dda8066d437a180f84f6e241be7585d73595fe974a8e5e3e09dddc2d863"}
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.201755 4697 generic.go:334] "Generic (PLEG): container finished" podID="eeb8f1bf-1cfd-43f2-83ec-b322061636f4" containerID="c46d7ff452f451007d7792bf9de829af4c060507675f15590dd8da47187dc3cf" exitCode=0
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.202496 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" event={"ID":"eeb8f1bf-1cfd-43f2-83ec-b322061636f4","Type":"ContainerDied","Data":"c46d7ff452f451007d7792bf9de829af4c060507675f15590dd8da47187dc3cf"}
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.242121 4697 generic.go:334] "Generic (PLEG): container finished" podID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerID="41730bf612d1b12077a746b67d7a91f4a81462e0cc7cdf86fe3d450b1e672c0a" exitCode=0
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.242226 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6887cfc8d4-v8f57" event={"ID":"dc00891c-0cae-42c0-bb0a-8e78bd146365","Type":"ContainerDied","Data":"41730bf612d1b12077a746b67d7a91f4a81462e0cc7cdf86fe3d450b1e672c0a"}
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.259260 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" event={"ID":"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6","Type":"ContainerStarted","Data":"a26dc14ed26f93a6b72b3e2ca898160523a4bf5818de3bb80b7087b1a0496cd3"}
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.259305 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" event={"ID":"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6","Type":"ContainerStarted","Data":"f6d65dcb8aadd847e861c19965b4943405b17070f9740d328b80a840a25793b9"}
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.273736 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dff8b9f65-4b4q2" event={"ID":"f67513b8-77d5-4a24-b1ee-ce73e70cb72d","Type":"ContainerStarted","Data":"5bb36edd027161e666946273961372a57fcd7bedbfc0f0ceecf8db325fcb4c92"}
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.300436 4697 generic.go:334] "Generic (PLEG): container finished" podID="70da0843-011d-422d-bc59-479d90e689a8" containerID="b032d78bfb5d09a5bcc4cce4a5692183cdd0441c6395fbb0780d86701d8bd0b2" exitCode=0
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.300466 4697 generic.go:334] "Generic (PLEG): container finished" podID="70da0843-011d-422d-bc59-479d90e689a8" containerID="023d0b7bbc4282457b9c42149fe43bd96f28c8d5b0006f9f50340bf622320c7a" exitCode=0
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.300490 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerDied","Data":"b032d78bfb5d09a5bcc4cce4a5692183cdd0441c6395fbb0780d86701d8bd0b2"}
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.300513 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerDied","Data":"023d0b7bbc4282457b9c42149fe43bd96f28c8d5b0006f9f50340bf622320c7a"}
Jan 27 15:30:02 crc kubenswrapper[4697]: I0127 15:30:02.300794 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" podStartSLOduration=2.30076967 podStartE2EDuration="2.30076967s" podCreationTimestamp="2026-01-27 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:02.281871501 +0000 UTC m=+1298.454271282" watchObservedRunningTime="2026-01-27 15:30:02.30076967 +0000 UTC m=+1298.473169441"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.313605 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dff8b9f65-4b4q2" event={"ID":"f67513b8-77d5-4a24-b1ee-ce73e70cb72d","Type":"ContainerStarted","Data":"fb3384cff0d76aafcf382fffd5450941c7768c5840c8a8874f454d4ae436ed7a"}
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.314061 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dff8b9f65-4b4q2"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.314074 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dff8b9f65-4b4q2" event={"ID":"f67513b8-77d5-4a24-b1ee-ce73e70cb72d","Type":"ContainerStarted","Data":"4fd15709c2d44b0e0709edad444970d56eb4e91782cae2485df1e9ec7b6c1654"}
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.315580 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" event={"ID":"001c4d9a-f883-48ed-aafa-9b820b5b9380","Type":"ContainerStarted","Data":"d2678d6a9b780b8efb89eb2ab113aff36e204d7ea628e403d7a8ef6567fe3963"}
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.315841 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b4dc8dd8d-w99s5"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.315910 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b4dc8dd8d-w99s5"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.317729 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" event={"ID":"eeb8f1bf-1cfd-43f2-83ec-b322061636f4","Type":"ContainerStarted","Data":"668e06fcd5cdf328c971098fbe1bab0532296b30c846cc20b9c22cd7ae2b7078"}
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.317918 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-72dd2"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.319461 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4c4714f-1111-4d49-88a7-e1ac4dfa70b6" containerID="a26dc14ed26f93a6b72b3e2ca898160523a4bf5818de3bb80b7087b1a0496cd3" exitCode=0
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.319496 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" event={"ID":"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6","Type":"ContainerDied","Data":"a26dc14ed26f93a6b72b3e2ca898160523a4bf5818de3bb80b7087b1a0496cd3"}
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.356297 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dff8b9f65-4b4q2" podStartSLOduration=3.356276403 podStartE2EDuration="3.356276403s" podCreationTimestamp="2026-01-27 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:03.332314991 +0000 UTC m=+1299.504714792" watchObservedRunningTime="2026-01-27 15:30:03.356276403 +0000 UTC m=+1299.528676184"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.382345 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" podStartSLOduration=4.3823229470000005 podStartE2EDuration="4.382322947s" podCreationTimestamp="2026-01-27 15:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:03.362916905 +0000 UTC m=+1299.535316706" watchObservedRunningTime="2026-01-27 15:30:03.382322947 +0000 UTC m=+1299.554722718"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.487265 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" podStartSLOduration=4.487243036 podStartE2EDuration="4.487243036s" podCreationTimestamp="2026-01-27 15:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:03.441420932 +0000 UTC m=+1299.613820713" watchObservedRunningTime="2026-01-27 15:30:03.487243036 +0000 UTC m=+1299.659642817"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.586400 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-cbf684fd-9bzgt"]
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.589593 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.591860 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.592414 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.675351 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-public-tls-certs\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.675421 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-config-data\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.675466 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtvm8\" (UniqueName: \"kubernetes.io/projected/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-kube-api-access-wtvm8\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.675530 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-combined-ca-bundle\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.675560 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-config-data-custom\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.675584 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-internal-tls-certs\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.675609 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-logs\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.676541 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-cbf684fd-9bzgt"]
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.777030 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtvm8\" (UniqueName: \"kubernetes.io/projected/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-kube-api-access-wtvm8\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.777120 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-combined-ca-bundle\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.777143 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-config-data-custom\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.777188 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-internal-tls-certs\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.777212 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-logs\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.777304 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-public-tls-certs\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.777345 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-config-data\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.779879 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-logs\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.783859 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-config-data-custom\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.788540 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-public-tls-certs\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.791330 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-config-data\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.794685 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-internal-tls-certs\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.822566 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-combined-ca-bundle\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.831345 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtvm8\" (UniqueName: \"kubernetes.io/projected/94a26d25-9d4f-4d9e-becb-5fef1852a9cc-kube-api-access-wtvm8\") pod \"barbican-api-cbf684fd-9bzgt\" (UID: \"94a26d25-9d4f-4d9e-becb-5fef1852a9cc\") " pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:03 crc kubenswrapper[4697]: I0127 15:30:03.967359 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-cbf684fd-9bzgt"
Jan 27 15:30:04 crc kubenswrapper[4697]: I0127 15:30:04.356155 4697 generic.go:334] "Generic (PLEG): container finished" podID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerID="2a16194145ec6654978b540e58e66ba2b45b349503991b45948adac1968da332" exitCode=0
Jan 27 15:30:04 crc kubenswrapper[4697]: I0127 15:30:04.356715 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6887cfc8d4-v8f57" event={"ID":"dc00891c-0cae-42c0-bb0a-8e78bd146365","Type":"ContainerDied","Data":"2a16194145ec6654978b540e58e66ba2b45b349503991b45948adac1968da332"}
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.417696 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6887cfc8d4-v8f57" event={"ID":"dc00891c-0cae-42c0-bb0a-8e78bd146365","Type":"ContainerDied","Data":"fc8e63de561ed76bb1d4d955372538f84c96e36748660036ee2ee2c41d5d4e68"}
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.418130 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc8e63de561ed76bb1d4d955372538f84c96e36748660036ee2ee2c41d5d4e68"
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.446229 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" event={"ID":"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6","Type":"ContainerDied","Data":"f6d65dcb8aadd847e861c19965b4943405b17070f9740d328b80a840a25793b9"}
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.446371 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d65dcb8aadd847e861c19965b4943405b17070f9740d328b80a840a25793b9"
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.459325 4697 generic.go:334] "Generic (PLEG): container finished" podID="ba2a2abf-806a-4708-8f03-9e68c85c6c6c" containerID="cbfa7c85b9e2c7f5b3e7e417f0fd23351a97f5c2e8291eebc7a7e7770d3b08b2" exitCode=0
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.459562 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n5g7m" event={"ID":"ba2a2abf-806a-4708-8f03-9e68c85c6c6c","Type":"ContainerDied","Data":"cbfa7c85b9e2c7f5b3e7e417f0fd23351a97f5c2e8291eebc7a7e7770d3b08b2"}
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.519222 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6887cfc8d4-v8f57"
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.653545 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt"
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.737708 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nprcp\" (UniqueName: \"kubernetes.io/projected/dc00891c-0cae-42c0-bb0a-8e78bd146365-kube-api-access-nprcp\") pod \"dc00891c-0cae-42c0-bb0a-8e78bd146365\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") "
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.737830 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-secret-volume\") pod \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") "
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.737875 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-ovndb-tls-certs\") pod \"dc00891c-0cae-42c0-bb0a-8e78bd146365\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") "
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.737932 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-config-volume\") pod \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") "
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.738006 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm7fz\" (UniqueName: \"kubernetes.io/projected/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-kube-api-access-hm7fz\") pod \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\" (UID: \"f4c4714f-1111-4d49-88a7-e1ac4dfa70b6\") "
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.738045 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-combined-ca-bundle\") pod \"dc00891c-0cae-42c0-bb0a-8e78bd146365\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") "
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.738108 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-httpd-config\") pod \"dc00891c-0cae-42c0-bb0a-8e78bd146365\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") "
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.738158 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-config\") pod \"dc00891c-0cae-42c0-bb0a-8e78bd146365\" (UID: \"dc00891c-0cae-42c0-bb0a-8e78bd146365\") "
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.749857 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-config-volume" (OuterVolumeSpecName: "config-volume") pod "f4c4714f-1111-4d49-88a7-e1ac4dfa70b6" (UID: "f4c4714f-1111-4d49-88a7-e1ac4dfa70b6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.765008 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc00891c-0cae-42c0-bb0a-8e78bd146365-kube-api-access-nprcp" (OuterVolumeSpecName: "kube-api-access-nprcp") pod "dc00891c-0cae-42c0-bb0a-8e78bd146365" (UID: "dc00891c-0cae-42c0-bb0a-8e78bd146365"). InnerVolumeSpecName "kube-api-access-nprcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.766869 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-kube-api-access-hm7fz" (OuterVolumeSpecName: "kube-api-access-hm7fz") pod "f4c4714f-1111-4d49-88a7-e1ac4dfa70b6" (UID: "f4c4714f-1111-4d49-88a7-e1ac4dfa70b6"). InnerVolumeSpecName "kube-api-access-hm7fz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.771416 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f4c4714f-1111-4d49-88a7-e1ac4dfa70b6" (UID: "f4c4714f-1111-4d49-88a7-e1ac4dfa70b6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.778583 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "dc00891c-0cae-42c0-bb0a-8e78bd146365" (UID: "dc00891c-0cae-42c0-bb0a-8e78bd146365"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.842540 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nprcp\" (UniqueName: \"kubernetes.io/projected/dc00891c-0cae-42c0-bb0a-8e78bd146365-kube-api-access-nprcp\") on node \"crc\" DevicePath \"\""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.842605 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.842624 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.842642 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm7fz\" (UniqueName: \"kubernetes.io/projected/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6-kube-api-access-hm7fz\") on node \"crc\" DevicePath \"\""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.842659 4697 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.860709 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-config" (OuterVolumeSpecName: "config") pod "dc00891c-0cae-42c0-bb0a-8e78bd146365" (UID: "dc00891c-0cae-42c0-bb0a-8e78bd146365"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.901920 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc00891c-0cae-42c0-bb0a-8e78bd146365" (UID: "dc00891c-0cae-42c0-bb0a-8e78bd146365"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.945097 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.945125 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-config\") on node \"crc\" DevicePath \"\""
Jan 27 15:30:05 crc kubenswrapper[4697]: I0127 15:30:05.975379 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "dc00891c-0cae-42c0-bb0a-8e78bd146365" (UID: "dc00891c-0cae-42c0-bb0a-8e78bd146365"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.047037 4697 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00891c-0cae-42c0-bb0a-8e78bd146365-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.114185 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-cbf684fd-9bzgt"]
Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.501492 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8c986997f-97nkx" event={"ID":"c283033b-665a-4e84-b347-5ab724df37be","Type":"ContainerStarted","Data":"300765f044897b9e5c572186de7228036848068d0ca251fd1f351c0fb1e12796"}
Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.501848 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8c986997f-97nkx" event={"ID":"c283033b-665a-4e84-b347-5ab724df37be","Type":"ContainerStarted","Data":"482c196aba506f89ebce88a8f3687e6901629642f38d8cb28af97fdfe207d1ab"}
Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.520140 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" event={"ID":"4ab6ee6b-e923-4905-8d8d-56f96e3bd471","Type":"ContainerStarted","Data":"bdb9a61c06fd9d9cf7a8cb53db3c93662e94f84731db101dae1fa59267859efb"}
Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.520192 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" event={"ID":"4ab6ee6b-e923-4905-8d8d-56f96e3bd471","Type":"ContainerStarted","Data":"7b4a8b9cc1cfd10cda1f2a8984d0be6c1d4414199c3319dac30dd5bd286be664"}
Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.528385 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8c986997f-97nkx" podStartSLOduration=2.8221655119999998
podStartE2EDuration="7.528363249s" podCreationTimestamp="2026-01-27 15:29:59 +0000 UTC" firstStartedPulling="2026-01-27 15:30:00.758002029 +0000 UTC m=+1296.930401800" lastFinishedPulling="2026-01-27 15:30:05.464199756 +0000 UTC m=+1301.636599537" observedRunningTime="2026-01-27 15:30:06.528174735 +0000 UTC m=+1302.700574516" watchObservedRunningTime="2026-01-27 15:30:06.528363249 +0000 UTC m=+1302.700763050" Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.550327 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6887cfc8d4-v8f57" Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.557349 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cbf684fd-9bzgt" event={"ID":"94a26d25-9d4f-4d9e-becb-5fef1852a9cc","Type":"ContainerStarted","Data":"e557d0baecfde8e2b49162e91ee84aefbb8ff4b7d3fc845ee3cd9129c44fc87d"} Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.557397 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cbf684fd-9bzgt" event={"ID":"94a26d25-9d4f-4d9e-becb-5fef1852a9cc","Type":"ContainerStarted","Data":"e2a8a5027d97fe3772df8203e92e1fb92e68bcc547e660f3f4720853f409af62"} Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.557468 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt" Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.599678 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b5997ff6-w2vq4" podStartSLOduration=3.076264436 podStartE2EDuration="7.599658561s" podCreationTimestamp="2026-01-27 15:29:59 +0000 UTC" firstStartedPulling="2026-01-27 15:30:00.872256445 +0000 UTC m=+1297.044656226" lastFinishedPulling="2026-01-27 15:30:05.39565057 +0000 UTC m=+1301.568050351" observedRunningTime="2026-01-27 15:30:06.557203899 +0000 UTC m=+1302.729603680" watchObservedRunningTime="2026-01-27 15:30:06.599658561 +0000 UTC m=+1302.772058332" Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.612426 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6887cfc8d4-v8f57"] Jan 27 15:30:06 crc kubenswrapper[4697]: I0127 15:30:06.624337 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6887cfc8d4-v8f57"] Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.037570 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.097987 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-scripts\") pod \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.098039 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mjlc\" (UniqueName: \"kubernetes.io/projected/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-kube-api-access-6mjlc\") pod \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.098192 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-combined-ca-bundle\") pod \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.098220 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-db-sync-config-data\") pod \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.098261 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-config-data\") pod \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.098379 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-etc-machine-id\") pod \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\" (UID: \"ba2a2abf-806a-4708-8f03-9e68c85c6c6c\") " Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.098772 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ba2a2abf-806a-4708-8f03-9e68c85c6c6c" (UID: "ba2a2abf-806a-4708-8f03-9e68c85c6c6c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.123155 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-scripts" (OuterVolumeSpecName: "scripts") pod "ba2a2abf-806a-4708-8f03-9e68c85c6c6c" (UID: "ba2a2abf-806a-4708-8f03-9e68c85c6c6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.130434 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ba2a2abf-806a-4708-8f03-9e68c85c6c6c" (UID: "ba2a2abf-806a-4708-8f03-9e68c85c6c6c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.148973 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-kube-api-access-6mjlc" (OuterVolumeSpecName: "kube-api-access-6mjlc") pod "ba2a2abf-806a-4708-8f03-9e68c85c6c6c" (UID: "ba2a2abf-806a-4708-8f03-9e68c85c6c6c"). InnerVolumeSpecName "kube-api-access-6mjlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.201933 4697 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.201965 4697 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.201974 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.201983 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mjlc\" (UniqueName: \"kubernetes.io/projected/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-kube-api-access-6mjlc\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.207924 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba2a2abf-806a-4708-8f03-9e68c85c6c6c" (UID: "ba2a2abf-806a-4708-8f03-9e68c85c6c6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.220748 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-config-data" (OuterVolumeSpecName: "config-data") pod "ba2a2abf-806a-4708-8f03-9e68c85c6c6c" (UID: "ba2a2abf-806a-4708-8f03-9e68c85c6c6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.303648 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.303679 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2a2abf-806a-4708-8f03-9e68c85c6c6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.560475 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cbf684fd-9bzgt" event={"ID":"94a26d25-9d4f-4d9e-becb-5fef1852a9cc","Type":"ContainerStarted","Data":"0f364eb96d2379988fb7eaa248cf237985c37d2528e0a0e748e2ecf30e2482f3"} Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.560805 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-cbf684fd-9bzgt" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.560818 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-cbf684fd-9bzgt" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.563124 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n5g7m" event={"ID":"ba2a2abf-806a-4708-8f03-9e68c85c6c6c","Type":"ContainerDied","Data":"2aa0ff461f3adbdfdc82aad5f7b5540544320bb4a0072566931df2dd64d8ae47"} Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.563175 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa0ff461f3adbdfdc82aad5f7b5540544320bb4a0072566931df2dd64d8ae47" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.563179 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n5g7m" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.594933 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-cbf684fd-9bzgt" podStartSLOduration=4.594912161 podStartE2EDuration="4.594912161s" podCreationTimestamp="2026-01-27 15:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:07.584346714 +0000 UTC m=+1303.756746495" watchObservedRunningTime="2026-01-27 15:30:07.594912161 +0000 UTC m=+1303.767311942" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.836368 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 15:30:07 crc kubenswrapper[4697]: E0127 15:30:07.836840 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c4714f-1111-4d49-88a7-e1ac4dfa70b6" containerName="collect-profiles" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.836866 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c4714f-1111-4d49-88a7-e1ac4dfa70b6" containerName="collect-profiles" Jan 27 15:30:07 crc kubenswrapper[4697]: E0127 15:30:07.836891 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2a2abf-806a-4708-8f03-9e68c85c6c6c" containerName="cinder-db-sync" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.836900 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2a2abf-806a-4708-8f03-9e68c85c6c6c" containerName="cinder-db-sync" Jan 27 15:30:07 crc kubenswrapper[4697]: E0127 15:30:07.836928 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-httpd" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.836937 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-httpd" Jan 27 15:30:07 crc 
kubenswrapper[4697]: E0127 15:30:07.836963 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-api" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.836987 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-api" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.837209 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2a2abf-806a-4708-8f03-9e68c85c6c6c" containerName="cinder-db-sync" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.837232 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4c4714f-1111-4d49-88a7-e1ac4dfa70b6" containerName="collect-profiles" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.837248 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-httpd" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.837261 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" containerName="neutron-api" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.838359 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.845099 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-h7rhn" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.845571 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.845759 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.848218 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.876714 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.912827 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33a73e4d-a656-45bf-bb16-39ddc92e053b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.912945 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.912972 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.913080 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xft8\" (UniqueName: \"kubernetes.io/projected/33a73e4d-a656-45bf-bb16-39ddc92e053b-kube-api-access-6xft8\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.913124 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-scripts\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.913168 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.938917 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-72dd2"] Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.939335 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" podUID="eeb8f1bf-1cfd-43f2-83ec-b322061636f4" containerName="dnsmasq-dns" containerID="cri-o://668e06fcd5cdf328c971098fbe1bab0532296b30c846cc20b9c22cd7ae2b7078" gracePeriod=10 Jan 27 15:30:07 crc kubenswrapper[4697]: I0127 15:30:07.946887 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 
15:30:08.024227 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4ngx7"] Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.027072 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.027158 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33a73e4d-a656-45bf-bb16-39ddc92e053b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.027234 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.027260 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.027383 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xft8\" (UniqueName: \"kubernetes.io/projected/33a73e4d-a656-45bf-bb16-39ddc92e053b-kube-api-access-6xft8\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc 
kubenswrapper[4697]: I0127 15:30:08.027426 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-scripts\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.031901 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33a73e4d-a656-45bf-bb16-39ddc92e053b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.096452 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.110011 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-scripts\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.112745 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.126152 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.135057 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.160650 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xft8\" (UniqueName: \"kubernetes.io/projected/33a73e4d-a656-45bf-bb16-39ddc92e053b-kube-api-access-6xft8\") pod \"cinder-scheduler-0\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.211370 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4ngx7"] Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.240616 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.240757 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-config\") pod 
\"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.241173 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.241216 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvzt\" (UniqueName: \"kubernetes.io/projected/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-kube-api-access-lzvzt\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.241255 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.241297 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.243043 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.249466 4697 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.251761 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.260235 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.342598 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.342644 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2676b51f-0e15-469e-98f6-8e2a76d4204f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.342679 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzvzt\" (UniqueName: \"kubernetes.io/projected/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-kube-api-access-lzvzt\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.342710 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 
15:30:08.342730 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbr22\" (UniqueName: \"kubernetes.io/projected/2676b51f-0e15-469e-98f6-8e2a76d4204f-kube-api-access-kbr22\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.342759 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data-custom\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.343064 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.343121 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-scripts\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.343202 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.343223 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2676b51f-0e15-469e-98f6-8e2a76d4204f-logs\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.343249 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.343280 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-config\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.343295 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.344138 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.344967 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.345453 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.346025 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.346575 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-config\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.375592 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzvzt\" (UniqueName: \"kubernetes.io/projected/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-kube-api-access-lzvzt\") pod \"dnsmasq-dns-5c9776ccc5-4ngx7\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.444290 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-scripts\") pod 
\"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.444610 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2676b51f-0e15-469e-98f6-8e2a76d4204f-logs\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.444637 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.444663 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.444684 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2676b51f-0e15-469e-98f6-8e2a76d4204f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.444737 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbr22\" (UniqueName: \"kubernetes.io/projected/2676b51f-0e15-469e-98f6-8e2a76d4204f-kube-api-access-kbr22\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.445218 4697 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data-custom\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.445589 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2676b51f-0e15-469e-98f6-8e2a76d4204f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.446081 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2676b51f-0e15-469e-98f6-8e2a76d4204f-logs\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.455869 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data-custom\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.455985 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.456612 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc 
kubenswrapper[4697]: I0127 15:30:08.456927 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-scripts\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.462374 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.477612 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbr22\" (UniqueName: \"kubernetes.io/projected/2676b51f-0e15-469e-98f6-8e2a76d4204f-kube-api-access-kbr22\") pod \"cinder-api-0\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.580196 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.589216 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.601535 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc00891c-0cae-42c0-bb0a-8e78bd146365" path="/var/lib/kubelet/pods/dc00891c-0cae-42c0-bb0a-8e78bd146365/volumes" Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.660057 4697 generic.go:334] "Generic (PLEG): container finished" podID="eeb8f1bf-1cfd-43f2-83ec-b322061636f4" containerID="668e06fcd5cdf328c971098fbe1bab0532296b30c846cc20b9c22cd7ae2b7078" exitCode=0 Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.661053 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" event={"ID":"eeb8f1bf-1cfd-43f2-83ec-b322061636f4","Type":"ContainerDied","Data":"668e06fcd5cdf328c971098fbe1bab0532296b30c846cc20b9c22cd7ae2b7078"} Jan 27 15:30:08 crc kubenswrapper[4697]: I0127 15:30:08.992326 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.257022 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.277582 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-svc\") pod \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.277670 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-config\") pod \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.277690 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-nb\") pod \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.277736 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f64v7\" (UniqueName: \"kubernetes.io/projected/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-kube-api-access-f64v7\") pod \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.277810 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-swift-storage-0\") pod \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.277865 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-sb\") pod \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\" (UID: \"eeb8f1bf-1cfd-43f2-83ec-b322061636f4\") " Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.332062 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-kube-api-access-f64v7" (OuterVolumeSpecName: "kube-api-access-f64v7") pod "eeb8f1bf-1cfd-43f2-83ec-b322061636f4" (UID: "eeb8f1bf-1cfd-43f2-83ec-b322061636f4"). InnerVolumeSpecName "kube-api-access-f64v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.386736 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eeb8f1bf-1cfd-43f2-83ec-b322061636f4" (UID: "eeb8f1bf-1cfd-43f2-83ec-b322061636f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.388619 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.388646 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f64v7\" (UniqueName: \"kubernetes.io/projected/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-kube-api-access-f64v7\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.396181 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eeb8f1bf-1cfd-43f2-83ec-b322061636f4" (UID: "eeb8f1bf-1cfd-43f2-83ec-b322061636f4"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.438547 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-config" (OuterVolumeSpecName: "config") pod "eeb8f1bf-1cfd-43f2-83ec-b322061636f4" (UID: "eeb8f1bf-1cfd-43f2-83ec-b322061636f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.441401 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4ngx7"] Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.451234 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eeb8f1bf-1cfd-43f2-83ec-b322061636f4" (UID: "eeb8f1bf-1cfd-43f2-83ec-b322061636f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.463156 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eeb8f1bf-1cfd-43f2-83ec-b322061636f4" (UID: "eeb8f1bf-1cfd-43f2-83ec-b322061636f4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.492622 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.492654 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.492665 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.492673 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb8f1bf-1cfd-43f2-83ec-b322061636f4-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.629628 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 15:30:09 crc kubenswrapper[4697]: W0127 15:30:09.661927 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2676b51f_0e15_469e_98f6_8e2a76d4204f.slice/crio-65b5d6cc51b826d72cf2d241e98e7e9b111f5bc5b9b290d025fa7242387837fd WatchSource:0}: Error finding container 65b5d6cc51b826d72cf2d241e98e7e9b111f5bc5b9b290d025fa7242387837fd: Status 404 returned error can't find the container with id 65b5d6cc51b826d72cf2d241e98e7e9b111f5bc5b9b290d025fa7242387837fd Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.699059 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"33a73e4d-a656-45bf-bb16-39ddc92e053b","Type":"ContainerStarted","Data":"502158077c8d370fe649528caec35b11c83318fc96ef2acc96b2cd271bd8c883"} Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.709029 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" event={"ID":"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6","Type":"ContainerStarted","Data":"ca46a54d19431532778657582500876d846e87db9aee269d3302d14ccd62d6b4"} Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.731264 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.731260 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-72dd2" event={"ID":"eeb8f1bf-1cfd-43f2-83ec-b322061636f4","Type":"ContainerDied","Data":"6de3994444687fe2f9ca9cc271a1a86c1a186b094847c2a0bf93998f3b34d520"} Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.732675 4697 scope.go:117] "RemoveContainer" containerID="668e06fcd5cdf328c971098fbe1bab0532296b30c846cc20b9c22cd7ae2b7078" Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.743927 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2676b51f-0e15-469e-98f6-8e2a76d4204f","Type":"ContainerStarted","Data":"65b5d6cc51b826d72cf2d241e98e7e9b111f5bc5b9b290d025fa7242387837fd"} Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.796845 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-72dd2"] Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.807298 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-72dd2"] Jan 27 15:30:09 crc kubenswrapper[4697]: I0127 15:30:09.929574 4697 scope.go:117] "RemoveContainer" containerID="c46d7ff452f451007d7792bf9de829af4c060507675f15590dd8da47187dc3cf" Jan 27 15:30:10 crc kubenswrapper[4697]: 
I0127 15:30:10.586428 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb8f1bf-1cfd-43f2-83ec-b322061636f4" path="/var/lib/kubelet/pods/eeb8f1bf-1cfd-43f2-83ec-b322061636f4/volumes" Jan 27 15:30:10 crc kubenswrapper[4697]: I0127 15:30:10.827819 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 15:30:10 crc kubenswrapper[4697]: I0127 15:30:10.833106 4697 generic.go:334] "Generic (PLEG): container finished" podID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" containerID="ff13ec4d904d0e03bae142bc4046b95950c7bee1bc8777a0f17a13b9476540fa" exitCode=0 Jan 27 15:30:10 crc kubenswrapper[4697]: I0127 15:30:10.833156 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" event={"ID":"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6","Type":"ContainerDied","Data":"ff13ec4d904d0e03bae142bc4046b95950c7bee1bc8777a0f17a13b9476540fa"} Jan 27 15:30:10 crc kubenswrapper[4697]: I0127 15:30:10.924257 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 15:30:10 crc kubenswrapper[4697]: I0127 15:30:10.924638 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:30:10 crc kubenswrapper[4697]: I0127 15:30:10.925505 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"94e5a0ea328ee095ebea3b739ec83ee42ff649968869720920ee234c3045166f"} pod="openstack/horizon-5b9dc56b78-cpxnx" containerMessage="Container horizon failed startup probe, will be restarted" Jan 27 15:30:10 crc kubenswrapper[4697]: I0127 15:30:10.925544 4697 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" containerID="cri-o://94e5a0ea328ee095ebea3b739ec83ee42ff649968869720920ee234c3045166f" gracePeriod=30 Jan 27 15:30:11 crc kubenswrapper[4697]: I0127 15:30:11.078051 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:30:11 crc kubenswrapper[4697]: I0127 15:30:11.650135 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:30:11 crc kubenswrapper[4697]: I0127 15:30:11.650375 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85b9bd5db8-9x55q" Jan 27 15:30:11 crc kubenswrapper[4697]: I0127 15:30:11.938384 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" event={"ID":"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6","Type":"ContainerStarted","Data":"7663e6286d9a601286c916efa6c870f5976476bfc96b99faf55dbe8f92d4a34c"} Jan 27 15:30:11 crc kubenswrapper[4697]: I0127 15:30:11.939651 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:11 crc kubenswrapper[4697]: I0127 15:30:11.962030 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2676b51f-0e15-469e-98f6-8e2a76d4204f","Type":"ContainerStarted","Data":"05346ea7c19e9d70c44f136c721843ff7aa80b17d1f177377fd120c190703a13"} Jan 27 15:30:11 crc kubenswrapper[4697]: I0127 15:30:11.972002 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" podStartSLOduration=4.971983551 
podStartE2EDuration="4.971983551s" podCreationTimestamp="2026-01-27 15:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:11.968236489 +0000 UTC m=+1308.140636280" watchObservedRunningTime="2026-01-27 15:30:11.971983551 +0000 UTC m=+1308.144383332" Jan 27 15:30:11 crc kubenswrapper[4697]: I0127 15:30:11.987952 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"33a73e4d-a656-45bf-bb16-39ddc92e053b","Type":"ContainerStarted","Data":"50d65fc05d7a40d0a9668384a98b46c43e19b80a68f93209e2b4cb36595ca405"} Jan 27 15:30:12 crc kubenswrapper[4697]: I0127 15:30:12.996517 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2676b51f-0e15-469e-98f6-8e2a76d4204f","Type":"ContainerStarted","Data":"7b4bd846e4014e621005731ea7aeb194539fae86bf87dc737dce2817f7b59901"} Jan 27 15:30:12 crc kubenswrapper[4697]: I0127 15:30:12.996810 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 15:30:12 crc kubenswrapper[4697]: I0127 15:30:12.997414 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerName="cinder-api-log" containerID="cri-o://05346ea7c19e9d70c44f136c721843ff7aa80b17d1f177377fd120c190703a13" gracePeriod=30 Jan 27 15:30:12 crc kubenswrapper[4697]: I0127 15:30:12.997423 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerName="cinder-api" containerID="cri-o://7b4bd846e4014e621005731ea7aeb194539fae86bf87dc737dce2817f7b59901" gracePeriod=30 Jan 27 15:30:13 crc kubenswrapper[4697]: I0127 15:30:13.000681 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"33a73e4d-a656-45bf-bb16-39ddc92e053b","Type":"ContainerStarted","Data":"5e827fb420b8a1829d7c434ffcfe05c5e5e940ba76f3638ffe07a099894d82d8"} Jan 27 15:30:13 crc kubenswrapper[4697]: I0127 15:30:13.027907 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.027889564 podStartE2EDuration="5.027889564s" podCreationTimestamp="2026-01-27 15:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:13.024553782 +0000 UTC m=+1309.196953563" watchObservedRunningTime="2026-01-27 15:30:13.027889564 +0000 UTC m=+1309.200289345" Jan 27 15:30:13 crc kubenswrapper[4697]: I0127 15:30:13.211953 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:30:13 crc kubenswrapper[4697]: I0127 15:30:13.250917 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.278534948 podStartE2EDuration="6.250899521s" podCreationTimestamp="2026-01-27 15:30:07 +0000 UTC" firstStartedPulling="2026-01-27 15:30:09.059970834 +0000 UTC m=+1305.232370615" lastFinishedPulling="2026-01-27 15:30:10.032335407 +0000 UTC m=+1306.204735188" observedRunningTime="2026-01-27 15:30:13.059059861 +0000 UTC m=+1309.231459642" watchObservedRunningTime="2026-01-27 15:30:13.250899521 +0000 UTC m=+1309.423299302" Jan 27 15:30:13 crc kubenswrapper[4697]: I0127 15:30:13.463853 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 15:30:14 crc kubenswrapper[4697]: I0127 15:30:14.012222 4697 generic.go:334] "Generic (PLEG): container finished" podID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerID="05346ea7c19e9d70c44f136c721843ff7aa80b17d1f177377fd120c190703a13" exitCode=143 Jan 27 15:30:14 crc kubenswrapper[4697]: I0127 15:30:14.013011 
4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2676b51f-0e15-469e-98f6-8e2a76d4204f","Type":"ContainerDied","Data":"05346ea7c19e9d70c44f136c721843ff7aa80b17d1f177377fd120c190703a13"} Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.022903 4697 generic.go:334] "Generic (PLEG): container finished" podID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerID="7b4bd846e4014e621005731ea7aeb194539fae86bf87dc737dce2817f7b59901" exitCode=0 Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.022969 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2676b51f-0e15-469e-98f6-8e2a76d4204f","Type":"ContainerDied","Data":"7b4bd846e4014e621005731ea7aeb194539fae86bf87dc737dce2817f7b59901"} Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.023586 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2676b51f-0e15-469e-98f6-8e2a76d4204f","Type":"ContainerDied","Data":"65b5d6cc51b826d72cf2d241e98e7e9b111f5bc5b9b290d025fa7242387837fd"} Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.023612 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65b5d6cc51b826d72cf2d241e98e7e9b111f5bc5b9b290d025fa7242387837fd" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.073832 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.074909 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.090200 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.264220 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data-custom\") pod \"2676b51f-0e15-469e-98f6-8e2a76d4204f\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.264684 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data\") pod \"2676b51f-0e15-469e-98f6-8e2a76d4204f\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.264774 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-scripts\") pod \"2676b51f-0e15-469e-98f6-8e2a76d4204f\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.264899 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-combined-ca-bundle\") pod \"2676b51f-0e15-469e-98f6-8e2a76d4204f\" (UID: 
\"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.265051 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbr22\" (UniqueName: \"kubernetes.io/projected/2676b51f-0e15-469e-98f6-8e2a76d4204f-kube-api-access-kbr22\") pod \"2676b51f-0e15-469e-98f6-8e2a76d4204f\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.265323 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2676b51f-0e15-469e-98f6-8e2a76d4204f-logs\") pod \"2676b51f-0e15-469e-98f6-8e2a76d4204f\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.265420 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2676b51f-0e15-469e-98f6-8e2a76d4204f-etc-machine-id\") pod \"2676b51f-0e15-469e-98f6-8e2a76d4204f\" (UID: \"2676b51f-0e15-469e-98f6-8e2a76d4204f\") " Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.265994 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2676b51f-0e15-469e-98f6-8e2a76d4204f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2676b51f-0e15-469e-98f6-8e2a76d4204f" (UID: "2676b51f-0e15-469e-98f6-8e2a76d4204f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.266683 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2676b51f-0e15-469e-98f6-8e2a76d4204f-logs" (OuterVolumeSpecName: "logs") pod "2676b51f-0e15-469e-98f6-8e2a76d4204f" (UID: "2676b51f-0e15-469e-98f6-8e2a76d4204f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.284053 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2676b51f-0e15-469e-98f6-8e2a76d4204f-kube-api-access-kbr22" (OuterVolumeSpecName: "kube-api-access-kbr22") pod "2676b51f-0e15-469e-98f6-8e2a76d4204f" (UID: "2676b51f-0e15-469e-98f6-8e2a76d4204f"). InnerVolumeSpecName "kube-api-access-kbr22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.289073 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2676b51f-0e15-469e-98f6-8e2a76d4204f" (UID: "2676b51f-0e15-469e-98f6-8e2a76d4204f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.289175 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-scripts" (OuterVolumeSpecName: "scripts") pod "2676b51f-0e15-469e-98f6-8e2a76d4204f" (UID: "2676b51f-0e15-469e-98f6-8e2a76d4204f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.371113 4697 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.371137 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.371146 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbr22\" (UniqueName: \"kubernetes.io/projected/2676b51f-0e15-469e-98f6-8e2a76d4204f-kube-api-access-kbr22\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.371155 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2676b51f-0e15-469e-98f6-8e2a76d4204f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.371163 4697 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2676b51f-0e15-469e-98f6-8e2a76d4204f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.381985 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2676b51f-0e15-469e-98f6-8e2a76d4204f" (UID: "2676b51f-0e15-469e-98f6-8e2a76d4204f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.423216 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data" (OuterVolumeSpecName: "config-data") pod "2676b51f-0e15-469e-98f6-8e2a76d4204f" (UID: "2676b51f-0e15-469e-98f6-8e2a76d4204f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.477160 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:15 crc kubenswrapper[4697]: I0127 15:30:15.477197 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2676b51f-0e15-469e-98f6-8e2a76d4204f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.032930 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.065304 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.080628 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.100250 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 15:30:16 crc kubenswrapper[4697]: E0127 15:30:16.100647 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb8f1bf-1cfd-43f2-83ec-b322061636f4" containerName="dnsmasq-dns" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.100665 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb8f1bf-1cfd-43f2-83ec-b322061636f4" containerName="dnsmasq-dns" Jan 27 15:30:16 crc kubenswrapper[4697]: E0127 15:30:16.100677 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerName="cinder-api" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.100686 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerName="cinder-api" Jan 27 15:30:16 crc kubenswrapper[4697]: E0127 15:30:16.100704 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerName="cinder-api-log" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.100710 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerName="cinder-api-log" Jan 27 15:30:16 crc kubenswrapper[4697]: E0127 15:30:16.100721 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb8f1bf-1cfd-43f2-83ec-b322061636f4" containerName="init" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.100726 4697 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eeb8f1bf-1cfd-43f2-83ec-b322061636f4" containerName="init" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.100902 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerName="cinder-api-log" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.100916 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2676b51f-0e15-469e-98f6-8e2a76d4204f" containerName="cinder-api" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.100925 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb8f1bf-1cfd-43f2-83ec-b322061636f4" containerName="dnsmasq-dns" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.101860 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.106265 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.106570 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.106708 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.134757 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.292856 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c37afd4-a5ce-450f-8d51-231aba899e23-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.292904 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-config-data-custom\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.292925 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.292967 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-scripts\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.293005 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z479j\" (UniqueName: \"kubernetes.io/projected/6c37afd4-a5ce-450f-8d51-231aba899e23-kube-api-access-z479j\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.293026 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.293044 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6c37afd4-a5ce-450f-8d51-231aba899e23-logs\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.293065 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-config-data\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.293078 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.394260 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z479j\" (UniqueName: \"kubernetes.io/projected/6c37afd4-a5ce-450f-8d51-231aba899e23-kube-api-access-z479j\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.394309 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.394329 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c37afd4-a5ce-450f-8d51-231aba899e23-logs\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 
15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.394350 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-config-data\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.394366 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.394424 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c37afd4-a5ce-450f-8d51-231aba899e23-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.394453 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-config-data-custom\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.394470 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.394511 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-scripts\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.396406 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c37afd4-a5ce-450f-8d51-231aba899e23-logs\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.396457 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c37afd4-a5ce-450f-8d51-231aba899e23-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.407517 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.408018 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-scripts\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.420456 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.422034 4697 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-config-data\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.433759 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-config-data-custom\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.439863 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z479j\" (UniqueName: \"kubernetes.io/projected/6c37afd4-a5ce-450f-8d51-231aba899e23-kube-api-access-z479j\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.444432 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c37afd4-a5ce-450f-8d51-231aba899e23-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6c37afd4-a5ce-450f-8d51-231aba899e23\") " pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.453964 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 15:30:16 crc kubenswrapper[4697]: I0127 15:30:16.579677 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2676b51f-0e15-469e-98f6-8e2a76d4204f" path="/var/lib/kubelet/pods/2676b51f-0e15-469e-98f6-8e2a76d4204f/volumes" Jan 27 15:30:17 crc kubenswrapper[4697]: I0127 15:30:17.161329 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 15:30:17 crc kubenswrapper[4697]: W0127 15:30:17.187971 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c37afd4_a5ce_450f_8d51_231aba899e23.slice/crio-646ad1913b35567c432f4b639bc8d2af7556ba4d937dde8e866a9b8ef016fb06 WatchSource:0}: Error finding container 646ad1913b35567c432f4b639bc8d2af7556ba4d937dde8e866a9b8ef016fb06: Status 404 returned error can't find the container with id 646ad1913b35567c432f4b639bc8d2af7556ba4d937dde8e866a9b8ef016fb06 Jan 27 15:30:17 crc kubenswrapper[4697]: I0127 15:30:17.919225 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-cbf684fd-9bzgt" Jan 27 15:30:18 crc kubenswrapper[4697]: I0127 15:30:18.047476 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c37afd4-a5ce-450f-8d51-231aba899e23","Type":"ContainerStarted","Data":"fed1d65b8b71d36c03e10522ab21fc5605b4ba3c17430a7753fe563a37d37bb7"} Jan 27 15:30:18 crc kubenswrapper[4697]: I0127 15:30:18.048060 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c37afd4-a5ce-450f-8d51-231aba899e23","Type":"ContainerStarted","Data":"646ad1913b35567c432f4b639bc8d2af7556ba4d937dde8e866a9b8ef016fb06"} Jan 27 15:30:18 crc kubenswrapper[4697]: I0127 15:30:18.287480 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-cbf684fd-9bzgt" Jan 27 15:30:18 crc kubenswrapper[4697]: I0127 
15:30:18.354230 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b4dc8dd8d-w99s5"] Jan 27 15:30:18 crc kubenswrapper[4697]: I0127 15:30:18.354516 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api" containerID="cri-o://d2678d6a9b780b8efb89eb2ab113aff36e204d7ea628e403d7a8ef6567fe3963" gracePeriod=30 Jan 27 15:30:18 crc kubenswrapper[4697]: I0127 15:30:18.354751 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api-log" containerID="cri-o://91b87d86adc1c0b52193f2a6f5c7b0dee6f084e2b7f4e6c72564bfb2aed13025" gracePeriod=30 Jan 27 15:30:18 crc kubenswrapper[4697]: I0127 15:30:18.584915 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:30:18 crc kubenswrapper[4697]: I0127 15:30:18.685709 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kc2ll"] Jan 27 15:30:18 crc kubenswrapper[4697]: I0127 15:30:18.685985 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" podUID="e4bebb87-c35b-4185-8c32-560d5ddc3664" containerName="dnsmasq-dns" containerID="cri-o://c9da9fa853897df35c23910665d430955fbd2044d732f4824decf0dbad6d31b8" gracePeriod=10 Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.098672 4697 generic.go:334] "Generic (PLEG): container finished" podID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerID="91b87d86adc1c0b52193f2a6f5c7b0dee6f084e2b7f4e6c72564bfb2aed13025" exitCode=143 Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.099049 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" 
event={"ID":"001c4d9a-f883-48ed-aafa-9b820b5b9380","Type":"ContainerDied","Data":"91b87d86adc1c0b52193f2a6f5c7b0dee6f084e2b7f4e6c72564bfb2aed13025"} Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.112915 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c37afd4-a5ce-450f-8d51-231aba899e23","Type":"ContainerStarted","Data":"ae9435141076dc09106005130170101d6eba108e7426f98cfe394d31cf4fdbe1"} Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.114049 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.122403 4697 generic.go:334] "Generic (PLEG): container finished" podID="e4bebb87-c35b-4185-8c32-560d5ddc3664" containerID="c9da9fa853897df35c23910665d430955fbd2044d732f4824decf0dbad6d31b8" exitCode=0 Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.122438 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" event={"ID":"e4bebb87-c35b-4185-8c32-560d5ddc3664","Type":"ContainerDied","Data":"c9da9fa853897df35c23910665d430955fbd2044d732f4824decf0dbad6d31b8"} Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.149097 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.149082507 podStartE2EDuration="3.149082507s" podCreationTimestamp="2026-01-27 15:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:19.138295825 +0000 UTC m=+1315.310695606" watchObservedRunningTime="2026-01-27 15:30:19.149082507 +0000 UTC m=+1315.321482288" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.243257 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.297331 4697 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.463654 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.600944 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-swift-storage-0\") pod \"e4bebb87-c35b-4185-8c32-560d5ddc3664\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.601010 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-nb\") pod \"e4bebb87-c35b-4185-8c32-560d5ddc3664\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.601129 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bws4d\" (UniqueName: \"kubernetes.io/projected/e4bebb87-c35b-4185-8c32-560d5ddc3664-kube-api-access-bws4d\") pod \"e4bebb87-c35b-4185-8c32-560d5ddc3664\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.601379 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-sb\") pod \"e4bebb87-c35b-4185-8c32-560d5ddc3664\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.601468 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-config\") pod \"e4bebb87-c35b-4185-8c32-560d5ddc3664\" (UID: 
\"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.601500 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-svc\") pod \"e4bebb87-c35b-4185-8c32-560d5ddc3664\" (UID: \"e4bebb87-c35b-4185-8c32-560d5ddc3664\") " Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.647272 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4bebb87-c35b-4185-8c32-560d5ddc3664-kube-api-access-bws4d" (OuterVolumeSpecName: "kube-api-access-bws4d") pod "e4bebb87-c35b-4185-8c32-560d5ddc3664" (UID: "e4bebb87-c35b-4185-8c32-560d5ddc3664"). InnerVolumeSpecName "kube-api-access-bws4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.707853 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bws4d\" (UniqueName: \"kubernetes.io/projected/e4bebb87-c35b-4185-8c32-560d5ddc3664-kube-api-access-bws4d\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.708645 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e4bebb87-c35b-4185-8c32-560d5ddc3664" (UID: "e4bebb87-c35b-4185-8c32-560d5ddc3664"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.708662 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4bebb87-c35b-4185-8c32-560d5ddc3664" (UID: "e4bebb87-c35b-4185-8c32-560d5ddc3664"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.719537 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e4bebb87-c35b-4185-8c32-560d5ddc3664" (UID: "e4bebb87-c35b-4185-8c32-560d5ddc3664"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.762211 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e4bebb87-c35b-4185-8c32-560d5ddc3664" (UID: "e4bebb87-c35b-4185-8c32-560d5ddc3664"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.808989 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.809016 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.809025 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.809035 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-ovsdbserver-nb\") on node 
\"crc\" DevicePath \"\"" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.819669 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-config" (OuterVolumeSpecName: "config") pod "e4bebb87-c35b-4185-8c32-560d5ddc3664" (UID: "e4bebb87-c35b-4185-8c32-560d5ddc3664"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:19 crc kubenswrapper[4697]: I0127 15:30:19.910003 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4bebb87-c35b-4185-8c32-560d5ddc3664-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.133416 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.135831 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kc2ll" event={"ID":"e4bebb87-c35b-4185-8c32-560d5ddc3664","Type":"ContainerDied","Data":"171a21f24282c55c836fe482a4d3434df658011cfd338ad9e50fab3fb7f1f870"} Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.135872 4697 scope.go:117] "RemoveContainer" containerID="c9da9fa853897df35c23910665d430955fbd2044d732f4824decf0dbad6d31b8" Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.136084 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerName="cinder-scheduler" containerID="cri-o://50d65fc05d7a40d0a9668384a98b46c43e19b80a68f93209e2b4cb36595ca405" gracePeriod=30 Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.136163 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerName="probe" 
containerID="cri-o://5e827fb420b8a1829d7c434ffcfe05c5e5e940ba76f3638ffe07a099894d82d8" gracePeriod=30 Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.167251 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kc2ll"] Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.181122 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kc2ll"] Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.219950 4697 scope.go:117] "RemoveContainer" containerID="10518bbe554a8ca61cdc472176eee5c16ed7c10cdfae11c2345e3111734a8059" Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.578468 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4bebb87-c35b-4185-8c32-560d5ddc3664" path="/var/lib/kubelet/pods/e4bebb87-c35b-4185-8c32-560d5ddc3664/volumes" Jan 27 15:30:20 crc kubenswrapper[4697]: I0127 15:30:20.710539 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-59df5b454d-5c7dx" Jan 27 15:30:21 crc kubenswrapper[4697]: I0127 15:30:21.827850 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:36960->10.217.0.163:9311: read: connection reset by peer" Jan 27 15:30:21 crc kubenswrapper[4697]: I0127 15:30:21.827895 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:36964->10.217.0.163:9311: read: connection reset by peer" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.152709 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 15:30:22 crc 
kubenswrapper[4697]: E0127 15:30:22.153281 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bebb87-c35b-4185-8c32-560d5ddc3664" containerName="dnsmasq-dns" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.153293 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bebb87-c35b-4185-8c32-560d5ddc3664" containerName="dnsmasq-dns" Jan 27 15:30:22 crc kubenswrapper[4697]: E0127 15:30:22.153305 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bebb87-c35b-4185-8c32-560d5ddc3664" containerName="init" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.153311 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bebb87-c35b-4185-8c32-560d5ddc3664" containerName="init" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.153500 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4bebb87-c35b-4185-8c32-560d5ddc3664" containerName="dnsmasq-dns" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.154139 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.159867 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-7zwn6" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.160302 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.160564 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.167846 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.176186 4697 generic.go:334] "Generic (PLEG): container finished" podID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerID="d2678d6a9b780b8efb89eb2ab113aff36e204d7ea628e403d7a8ef6567fe3963" exitCode=0 Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.176269 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" event={"ID":"001c4d9a-f883-48ed-aafa-9b820b5b9380","Type":"ContainerDied","Data":"d2678d6a9b780b8efb89eb2ab113aff36e204d7ea628e403d7a8ef6567fe3963"} Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.211151 4697 generic.go:334] "Generic (PLEG): container finished" podID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerID="5e827fb420b8a1829d7c434ffcfe05c5e5e940ba76f3638ffe07a099894d82d8" exitCode=0 Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.211184 4697 generic.go:334] "Generic (PLEG): container finished" podID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerID="50d65fc05d7a40d0a9668384a98b46c43e19b80a68f93209e2b4cb36595ca405" exitCode=0 Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.211253 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"33a73e4d-a656-45bf-bb16-39ddc92e053b","Type":"ContainerDied","Data":"5e827fb420b8a1829d7c434ffcfe05c5e5e940ba76f3638ffe07a099894d82d8"} Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.211281 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"33a73e4d-a656-45bf-bb16-39ddc92e053b","Type":"ContainerDied","Data":"50d65fc05d7a40d0a9668384a98b46c43e19b80a68f93209e2b4cb36595ca405"} Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.242174 4697 generic.go:334] "Generic (PLEG): container finished" podID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerID="e54450188c94f7298427d91a62c88df853535928735655dd6ef49dea887a8a99" exitCode=137 Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.242221 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerDied","Data":"e54450188c94f7298427d91a62c88df853535928735655dd6ef49dea887a8a99"} Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.242248 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerStarted","Data":"ebe38bf6f6e82a4ae410ea90d70082a99bbf9864bd0a371e11f885c6c1ee2d61"} Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.259007 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d176f9c-9152-4162-b723-1f6e8330118a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.259358 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4lzc\" (UniqueName: 
\"kubernetes.io/projected/3d176f9c-9152-4162-b723-1f6e8330118a-kube-api-access-f4lzc\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.259473 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3d176f9c-9152-4162-b723-1f6e8330118a-openstack-config-secret\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.259572 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3d176f9c-9152-4162-b723-1f6e8330118a-openstack-config\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.362724 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.362875 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d176f9c-9152-4162-b723-1f6e8330118a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.363007 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4lzc\" (UniqueName: \"kubernetes.io/projected/3d176f9c-9152-4162-b723-1f6e8330118a-kube-api-access-f4lzc\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.363122 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3d176f9c-9152-4162-b723-1f6e8330118a-openstack-config-secret\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.363206 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3d176f9c-9152-4162-b723-1f6e8330118a-openstack-config\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.365229 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3d176f9c-9152-4162-b723-1f6e8330118a-openstack-config\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.373257 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3d176f9c-9152-4162-b723-1f6e8330118a-openstack-config-secret\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.378674 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d176f9c-9152-4162-b723-1f6e8330118a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.387301 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4lzc\" (UniqueName: \"kubernetes.io/projected/3d176f9c-9152-4162-b723-1f6e8330118a-kube-api-access-f4lzc\") pod \"openstackclient\" (UID: \"3d176f9c-9152-4162-b723-1f6e8330118a\") " pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.464399 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-combined-ca-bundle\") pod \"001c4d9a-f883-48ed-aafa-9b820b5b9380\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.464540 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data-custom\") pod \"001c4d9a-f883-48ed-aafa-9b820b5b9380\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.464579 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001c4d9a-f883-48ed-aafa-9b820b5b9380-logs\") pod 
\"001c4d9a-f883-48ed-aafa-9b820b5b9380\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.464623 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5924n\" (UniqueName: \"kubernetes.io/projected/001c4d9a-f883-48ed-aafa-9b820b5b9380-kube-api-access-5924n\") pod \"001c4d9a-f883-48ed-aafa-9b820b5b9380\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.464696 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data\") pod \"001c4d9a-f883-48ed-aafa-9b820b5b9380\" (UID: \"001c4d9a-f883-48ed-aafa-9b820b5b9380\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.466305 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/001c4d9a-f883-48ed-aafa-9b820b5b9380-logs" (OuterVolumeSpecName: "logs") pod "001c4d9a-f883-48ed-aafa-9b820b5b9380" (UID: "001c4d9a-f883-48ed-aafa-9b820b5b9380"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.472956 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "001c4d9a-f883-48ed-aafa-9b820b5b9380" (UID: "001c4d9a-f883-48ed-aafa-9b820b5b9380"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.475271 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/001c4d9a-f883-48ed-aafa-9b820b5b9380-kube-api-access-5924n" (OuterVolumeSpecName: "kube-api-access-5924n") pod "001c4d9a-f883-48ed-aafa-9b820b5b9380" (UID: "001c4d9a-f883-48ed-aafa-9b820b5b9380"). InnerVolumeSpecName "kube-api-access-5924n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.476476 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.527529 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "001c4d9a-f883-48ed-aafa-9b820b5b9380" (UID: "001c4d9a-f883-48ed-aafa-9b820b5b9380"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.566654 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001c4d9a-f883-48ed-aafa-9b820b5b9380-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.566691 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5924n\" (UniqueName: \"kubernetes.io/projected/001c4d9a-f883-48ed-aafa-9b820b5b9380-kube-api-access-5924n\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.566702 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.566710 4697 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.567988 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data" (OuterVolumeSpecName: "config-data") pod "001c4d9a-f883-48ed-aafa-9b820b5b9380" (UID: "001c4d9a-f883-48ed-aafa-9b820b5b9380"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.668095 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001c4d9a-f883-48ed-aafa-9b820b5b9380-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.675546 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.769375 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-scripts\") pod \"33a73e4d-a656-45bf-bb16-39ddc92e053b\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.769464 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data\") pod \"33a73e4d-a656-45bf-bb16-39ddc92e053b\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.769499 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xft8\" (UniqueName: \"kubernetes.io/projected/33a73e4d-a656-45bf-bb16-39ddc92e053b-kube-api-access-6xft8\") pod \"33a73e4d-a656-45bf-bb16-39ddc92e053b\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.769517 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data-custom\") pod \"33a73e4d-a656-45bf-bb16-39ddc92e053b\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.769664 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-combined-ca-bundle\") pod \"33a73e4d-a656-45bf-bb16-39ddc92e053b\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.769713 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/33a73e4d-a656-45bf-bb16-39ddc92e053b-etc-machine-id\") pod \"33a73e4d-a656-45bf-bb16-39ddc92e053b\" (UID: \"33a73e4d-a656-45bf-bb16-39ddc92e053b\") " Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.770138 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33a73e4d-a656-45bf-bb16-39ddc92e053b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "33a73e4d-a656-45bf-bb16-39ddc92e053b" (UID: "33a73e4d-a656-45bf-bb16-39ddc92e053b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.784152 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-scripts" (OuterVolumeSpecName: "scripts") pod "33a73e4d-a656-45bf-bb16-39ddc92e053b" (UID: "33a73e4d-a656-45bf-bb16-39ddc92e053b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.785130 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a73e4d-a656-45bf-bb16-39ddc92e053b-kube-api-access-6xft8" (OuterVolumeSpecName: "kube-api-access-6xft8") pod "33a73e4d-a656-45bf-bb16-39ddc92e053b" (UID: "33a73e4d-a656-45bf-bb16-39ddc92e053b"). InnerVolumeSpecName "kube-api-access-6xft8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.790245 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33a73e4d-a656-45bf-bb16-39ddc92e053b" (UID: "33a73e4d-a656-45bf-bb16-39ddc92e053b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.871504 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xft8\" (UniqueName: \"kubernetes.io/projected/33a73e4d-a656-45bf-bb16-39ddc92e053b-kube-api-access-6xft8\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.871879 4697 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.871891 4697 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33a73e4d-a656-45bf-bb16-39ddc92e053b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.871925 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.913899 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33a73e4d-a656-45bf-bb16-39ddc92e053b" (UID: "33a73e4d-a656-45bf-bb16-39ddc92e053b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.932877 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data" (OuterVolumeSpecName: "config-data") pod "33a73e4d-a656-45bf-bb16-39ddc92e053b" (UID: "33a73e4d-a656-45bf-bb16-39ddc92e053b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.973137 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:22 crc kubenswrapper[4697]: I0127 15:30:22.973320 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a73e4d-a656-45bf-bb16-39ddc92e053b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.012586 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.251755 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"33a73e4d-a656-45bf-bb16-39ddc92e053b","Type":"ContainerDied","Data":"502158077c8d370fe649528caec35b11c83318fc96ef2acc96b2cd271bd8c883"} Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.251812 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.251841 4697 scope.go:117] "RemoveContainer" containerID="5e827fb420b8a1829d7c434ffcfe05c5e5e940ba76f3638ffe07a099894d82d8" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.255076 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" event={"ID":"001c4d9a-f883-48ed-aafa-9b820b5b9380","Type":"ContainerDied","Data":"aebb4dda8066d437a180f84f6e241be7585d73595fe974a8e5e3e09dddc2d863"} Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.255155 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b4dc8dd8d-w99s5" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.257934 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3d176f9c-9152-4162-b723-1f6e8330118a","Type":"ContainerStarted","Data":"0a6c465f71625476ea7158a8c800e8e727062460eeaa74cdd0d1eb768962d6bd"} Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.272381 4697 scope.go:117] "RemoveContainer" containerID="50d65fc05d7a40d0a9668384a98b46c43e19b80a68f93209e2b4cb36595ca405" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.316114 4697 scope.go:117] "RemoveContainer" containerID="d2678d6a9b780b8efb89eb2ab113aff36e204d7ea628e403d7a8ef6567fe3963" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.333343 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b4dc8dd8d-w99s5"] Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.378910 4697 scope.go:117] "RemoveContainer" containerID="91b87d86adc1c0b52193f2a6f5c7b0dee6f084e2b7f4e6c72564bfb2aed13025" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.379037 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b4dc8dd8d-w99s5"] Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.394050 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.407086 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.413375 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 15:30:23 crc kubenswrapper[4697]: E0127 15:30:23.413878 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerName="cinder-scheduler" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.413904 4697 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerName="cinder-scheduler" Jan 27 15:30:23 crc kubenswrapper[4697]: E0127 15:30:23.413940 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerName="probe" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.413946 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerName="probe" Jan 27 15:30:23 crc kubenswrapper[4697]: E0127 15:30:23.413963 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.413970 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api" Jan 27 15:30:23 crc kubenswrapper[4697]: E0127 15:30:23.413980 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api-log" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.413986 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api-log" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.414195 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api-log" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.414210 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" containerName="barbican-api" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.414230 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerName="cinder-scheduler" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.414241 4697 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="33a73e4d-a656-45bf-bb16-39ddc92e053b" containerName="probe" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.415341 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.420916 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.430977 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.482070 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-scripts\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.482147 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.482527 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.482625 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.482711 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l9c6\" (UniqueName: \"kubernetes.io/projected/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-kube-api-access-9l9c6\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.482751 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.584225 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.584276 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-config-data\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.584309 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l9c6\" (UniqueName: \"kubernetes.io/projected/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-kube-api-access-9l9c6\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " 
pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.584334 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.584354 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-scripts\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.584352 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.584377 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.590272 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.590359 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-config-data\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.590378 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.593612 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-scripts\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.609305 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l9c6\" (UniqueName: \"kubernetes.io/projected/b69b7a05-c4c4-48a4-a4fa-0cc140a18080-kube-api-access-9l9c6\") pod \"cinder-scheduler-0\" (UID: \"b69b7a05-c4c4-48a4-a4fa-0cc140a18080\") " pod="openstack/cinder-scheduler-0" Jan 27 15:30:23 crc kubenswrapper[4697]: I0127 15:30:23.733904 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 15:30:24 crc kubenswrapper[4697]: I0127 15:30:24.299587 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 15:30:24 crc kubenswrapper[4697]: W0127 15:30:24.310321 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb69b7a05_c4c4_48a4_a4fa_0cc140a18080.slice/crio-4041793033c5ec7ef3f179d73c176582710d8cd16787e55ee620c067de96e746 WatchSource:0}: Error finding container 4041793033c5ec7ef3f179d73c176582710d8cd16787e55ee620c067de96e746: Status 404 returned error can't find the container with id 4041793033c5ec7ef3f179d73c176582710d8cd16787e55ee620c067de96e746 Jan 27 15:30:24 crc kubenswrapper[4697]: I0127 15:30:24.599482 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="001c4d9a-f883-48ed-aafa-9b820b5b9380" path="/var/lib/kubelet/pods/001c4d9a-f883-48ed-aafa-9b820b5b9380/volumes" Jan 27 15:30:24 crc kubenswrapper[4697]: I0127 15:30:24.602845 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a73e4d-a656-45bf-bb16-39ddc92e053b" path="/var/lib/kubelet/pods/33a73e4d-a656-45bf-bb16-39ddc92e053b/volumes" Jan 27 15:30:25 crc kubenswrapper[4697]: I0127 15:30:25.109481 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:30:25 crc kubenswrapper[4697]: I0127 15:30:25.109873 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 15:30:25 crc kubenswrapper[4697]: I0127 15:30:25.293131 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b69b7a05-c4c4-48a4-a4fa-0cc140a18080","Type":"ContainerStarted","Data":"33c5a452fe6653bfad0e64195463327dce2de9e8008ab76e1bf79a3c254a9c52"} Jan 27 15:30:25 crc kubenswrapper[4697]: I0127 15:30:25.293426 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b69b7a05-c4c4-48a4-a4fa-0cc140a18080","Type":"ContainerStarted","Data":"4041793033c5ec7ef3f179d73c176582710d8cd16787e55ee620c067de96e746"} Jan 27 15:30:26 crc kubenswrapper[4697]: I0127 15:30:26.303609 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b69b7a05-c4c4-48a4-a4fa-0cc140a18080","Type":"ContainerStarted","Data":"a87931d79b4491af91b4a771df9d9f5e17b1c308028385faaebeb65f1177c6c3"} Jan 27 15:30:26 crc kubenswrapper[4697]: I0127 15:30:26.326202 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.326185063 podStartE2EDuration="3.326185063s" podCreationTimestamp="2026-01-27 15:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:26.321219342 +0000 UTC m=+1322.493619123" watchObservedRunningTime="2026-01-27 15:30:26.326185063 +0000 UTC m=+1322.498584834" Jan 27 15:30:28 crc kubenswrapper[4697]: I0127 15:30:28.734724 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.294990 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-8bdd65479-mrv2d"] Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.297362 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.300682 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.301020 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.301359 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.313273 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8bdd65479-mrv2d"] Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.415546 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-combined-ca-bundle\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.415897 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f6bc9e4-3f3f-4e33-a648-4381818937f1-run-httpd\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.415981 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5rs\" (UniqueName: \"kubernetes.io/projected/8f6bc9e4-3f3f-4e33-a648-4381818937f1-kube-api-access-5j5rs\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc 
kubenswrapper[4697]: I0127 15:30:29.416015 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f6bc9e4-3f3f-4e33-a648-4381818937f1-log-httpd\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.416103 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-internal-tls-certs\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.416175 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-public-tls-certs\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.416284 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-config-data\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.416366 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f6bc9e4-3f3f-4e33-a648-4381818937f1-etc-swift\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 
15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.518117 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f6bc9e4-3f3f-4e33-a648-4381818937f1-log-httpd\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.518192 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-internal-tls-certs\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.518232 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-public-tls-certs\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.518283 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-config-data\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.518314 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f6bc9e4-3f3f-4e33-a648-4381818937f1-etc-swift\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.518349 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-combined-ca-bundle\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.518368 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f6bc9e4-3f3f-4e33-a648-4381818937f1-run-httpd\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.518403 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j5rs\" (UniqueName: \"kubernetes.io/projected/8f6bc9e4-3f3f-4e33-a648-4381818937f1-kube-api-access-5j5rs\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.519129 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f6bc9e4-3f3f-4e33-a648-4381818937f1-log-httpd\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.520423 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f6bc9e4-3f3f-4e33-a648-4381818937f1-run-httpd\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.526757 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-combined-ca-bundle\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.527015 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-public-tls-certs\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.527129 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f6bc9e4-3f3f-4e33-a648-4381818937f1-etc-swift\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.527444 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-config-data\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.544489 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j5rs\" (UniqueName: \"kubernetes.io/projected/8f6bc9e4-3f3f-4e33-a648-4381818937f1-kube-api-access-5j5rs\") pod \"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.547735 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6bc9e4-3f3f-4e33-a648-4381818937f1-internal-tls-certs\") pod 
\"swift-proxy-8bdd65479-mrv2d\" (UID: \"8f6bc9e4-3f3f-4e33-a648-4381818937f1\") " pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:29 crc kubenswrapper[4697]: I0127 15:30:29.623917 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:30 crc kubenswrapper[4697]: I0127 15:30:30.283579 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 15:30:30 crc kubenswrapper[4697]: I0127 15:30:30.351724 4697 generic.go:334] "Generic (PLEG): container finished" podID="70da0843-011d-422d-bc59-479d90e689a8" containerID="fdfb3301b52faa56ab862269be9076c39667ffde1d926c21799f9da554b86682" exitCode=137 Jan 27 15:30:30 crc kubenswrapper[4697]: I0127 15:30:30.352045 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerDied","Data":"fdfb3301b52faa56ab862269be9076c39667ffde1d926c21799f9da554b86682"} Jan 27 15:30:30 crc kubenswrapper[4697]: I0127 15:30:30.628602 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:30:30 crc kubenswrapper[4697]: I0127 15:30:30.628635 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:30:31 crc kubenswrapper[4697]: I0127 15:30:31.373811 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dff8b9f65-4b4q2" Jan 27 15:30:31 crc kubenswrapper[4697]: I0127 15:30:31.448220 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5ccccf969c-jgqtz"] Jan 27 15:30:31 crc kubenswrapper[4697]: I0127 15:30:31.448445 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5ccccf969c-jgqtz" podUID="c83b7e83-b006-4f05-9f00-aa03173c05d9" containerName="neutron-api" 
containerID="cri-o://d9a2cb0992d090f183121f1d4ff95b09feb8cb2dcaa7f40e67590cadc230cdde" gracePeriod=30 Jan 27 15:30:31 crc kubenswrapper[4697]: I0127 15:30:31.448848 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5ccccf969c-jgqtz" podUID="c83b7e83-b006-4f05-9f00-aa03173c05d9" containerName="neutron-httpd" containerID="cri-o://fcd98db175548bf792a3dde4f85acce5344b265205cdc49b7c80a39ace32143d" gracePeriod=30 Jan 27 15:30:32 crc kubenswrapper[4697]: I0127 15:30:32.370868 4697 generic.go:334] "Generic (PLEG): container finished" podID="c83b7e83-b006-4f05-9f00-aa03173c05d9" containerID="fcd98db175548bf792a3dde4f85acce5344b265205cdc49b7c80a39ace32143d" exitCode=0 Jan 27 15:30:32 crc kubenswrapper[4697]: I0127 15:30:32.370905 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ccccf969c-jgqtz" event={"ID":"c83b7e83-b006-4f05-9f00-aa03173c05d9","Type":"ContainerDied","Data":"fcd98db175548bf792a3dde4f85acce5344b265205cdc49b7c80a39ace32143d"} Jan 27 15:30:32 crc kubenswrapper[4697]: I0127 15:30:32.534246 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.142:3000/\": dial tcp 10.217.0.142:3000: connect: connection refused" Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.008125 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.013825 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-log" containerID="cri-o://65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e" gracePeriod=30 Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.013899 4697 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-httpd" containerID="cri-o://1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93" gracePeriod=30 Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.028415 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": EOF" Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.028511 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": EOF" Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.028612 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": EOF" Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.029680 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": EOF" Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.386135 4697 generic.go:334] "Generic (PLEG): container finished" podID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerID="65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e" exitCode=143 Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.386177 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"7a2066c3-d242-4f3b-85bd-f407f06cded2","Type":"ContainerDied","Data":"65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e"} Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.924716 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.925353 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerName="glance-log" containerID="cri-o://498ed7a6e35d21a4ad94e6d7396c394cf0a42cfc1834bbb7b8a985b27a7d0073" gracePeriod=30 Jan 27 15:30:33 crc kubenswrapper[4697]: I0127 15:30:33.925896 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerName="glance-httpd" containerID="cri-o://a58231e6612184f8a6d1d9111a036647b8f787183d1eb8701d7b6b23cbc57c25" gracePeriod=30 Jan 27 15:30:34 crc kubenswrapper[4697]: I0127 15:30:34.122879 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 15:30:34 crc kubenswrapper[4697]: I0127 15:30:34.402614 4697 generic.go:334] "Generic (PLEG): container finished" podID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerID="498ed7a6e35d21a4ad94e6d7396c394cf0a42cfc1834bbb7b8a985b27a7d0073" exitCode=143 Jan 27 15:30:34 crc kubenswrapper[4697]: I0127 15:30:34.402660 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6","Type":"ContainerDied","Data":"498ed7a6e35d21a4ad94e6d7396c394cf0a42cfc1834bbb7b8a985b27a7d0073"} Jan 27 15:30:37 crc kubenswrapper[4697]: I0127 15:30:37.519340 4697 generic.go:334] "Generic (PLEG): container finished" podID="c83b7e83-b006-4f05-9f00-aa03173c05d9" 
containerID="d9a2cb0992d090f183121f1d4ff95b09feb8cb2dcaa7f40e67590cadc230cdde" exitCode=0 Jan 27 15:30:37 crc kubenswrapper[4697]: I0127 15:30:37.519389 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ccccf969c-jgqtz" event={"ID":"c83b7e83-b006-4f05-9f00-aa03173c05d9","Type":"ContainerDied","Data":"d9a2cb0992d090f183121f1d4ff95b09feb8cb2dcaa7f40e67590cadc230cdde"} Jan 27 15:30:38 crc kubenswrapper[4697]: I0127 15:30:38.529493 4697 generic.go:334] "Generic (PLEG): container finished" podID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerID="a58231e6612184f8a6d1d9111a036647b8f787183d1eb8701d7b6b23cbc57c25" exitCode=0 Jan 27 15:30:38 crc kubenswrapper[4697]: I0127 15:30:38.529548 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6","Type":"ContainerDied","Data":"a58231e6612184f8a6d1d9111a036647b8f787183d1eb8701d7b6b23cbc57c25"} Jan 27 15:30:38 crc kubenswrapper[4697]: E0127 15:30:38.648746 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 27 15:30:38 crc kubenswrapper[4697]: E0127 15:30:38.649183 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b4h574h696h6hfbh67dh68bh584h5ffh67chbfh675h54bh587h5bfh5fbh558hd4h68ch54bh68ch55bh67chf5h695h698h545h566h88h5c4h75h54q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4lzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(3d176f9c-9152-4162-b723-1f6e8330118a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:30:38 crc kubenswrapper[4697]: E0127 15:30:38.654057 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="3d176f9c-9152-4162-b723-1f6e8330118a" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.135983 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.248935 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-run-httpd\") pod \"70da0843-011d-422d-bc59-479d90e689a8\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.249040 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-combined-ca-bundle\") pod \"70da0843-011d-422d-bc59-479d90e689a8\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.249105 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-log-httpd\") pod \"70da0843-011d-422d-bc59-479d90e689a8\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.249133 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-scripts\") pod \"70da0843-011d-422d-bc59-479d90e689a8\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.249202 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-config-data\") pod \"70da0843-011d-422d-bc59-479d90e689a8\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.249251 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-sg-core-conf-yaml\") pod \"70da0843-011d-422d-bc59-479d90e689a8\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.249314 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j62jp\" (UniqueName: \"kubernetes.io/projected/70da0843-011d-422d-bc59-479d90e689a8-kube-api-access-j62jp\") pod \"70da0843-011d-422d-bc59-479d90e689a8\" (UID: \"70da0843-011d-422d-bc59-479d90e689a8\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.249947 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "70da0843-011d-422d-bc59-479d90e689a8" (UID: "70da0843-011d-422d-bc59-479d90e689a8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.254606 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "70da0843-011d-422d-bc59-479d90e689a8" (UID: "70da0843-011d-422d-bc59-479d90e689a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.261086 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-scripts" (OuterVolumeSpecName: "scripts") pod "70da0843-011d-422d-bc59-479d90e689a8" (UID: "70da0843-011d-422d-bc59-479d90e689a8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.267040 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70da0843-011d-422d-bc59-479d90e689a8-kube-api-access-j62jp" (OuterVolumeSpecName: "kube-api-access-j62jp") pod "70da0843-011d-422d-bc59-479d90e689a8" (UID: "70da0843-011d-422d-bc59-479d90e689a8"). InnerVolumeSpecName "kube-api-access-j62jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.323576 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "70da0843-011d-422d-bc59-479d90e689a8" (UID: "70da0843-011d-422d-bc59-479d90e689a8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.351026 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70da0843-011d-422d-bc59-479d90e689a8" (UID: "70da0843-011d-422d-bc59-479d90e689a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.351206 4697 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.351236 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.351245 4697 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.351256 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j62jp\" (UniqueName: \"kubernetes.io/projected/70da0843-011d-422d-bc59-479d90e689a8-kube-api-access-j62jp\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.351265 4697 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70da0843-011d-422d-bc59-479d90e689a8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.351273 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.383744 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.428357 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-config-data" (OuterVolumeSpecName: "config-data") pod "70da0843-011d-422d-bc59-479d90e689a8" (UID: "70da0843-011d-422d-bc59-479d90e689a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.452354 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-public-tls-certs\") pod \"c83b7e83-b006-4f05-9f00-aa03173c05d9\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.452421 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-config\") pod \"c83b7e83-b006-4f05-9f00-aa03173c05d9\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.452479 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-internal-tls-certs\") pod \"c83b7e83-b006-4f05-9f00-aa03173c05d9\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.452552 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-combined-ca-bundle\") pod \"c83b7e83-b006-4f05-9f00-aa03173c05d9\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.452577 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-httpd-config\") pod \"c83b7e83-b006-4f05-9f00-aa03173c05d9\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.452600 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrbs6\" (UniqueName: \"kubernetes.io/projected/c83b7e83-b006-4f05-9f00-aa03173c05d9-kube-api-access-nrbs6\") pod \"c83b7e83-b006-4f05-9f00-aa03173c05d9\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.452671 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-ovndb-tls-certs\") pod \"c83b7e83-b006-4f05-9f00-aa03173c05d9\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.453006 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70da0843-011d-422d-bc59-479d90e689a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.460314 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c83b7e83-b006-4f05-9f00-aa03173c05d9" (UID: "c83b7e83-b006-4f05-9f00-aa03173c05d9"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.461310 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83b7e83-b006-4f05-9f00-aa03173c05d9-kube-api-access-nrbs6" (OuterVolumeSpecName: "kube-api-access-nrbs6") pod "c83b7e83-b006-4f05-9f00-aa03173c05d9" (UID: "c83b7e83-b006-4f05-9f00-aa03173c05d9"). InnerVolumeSpecName "kube-api-access-nrbs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: W0127 15:30:39.515422 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f6bc9e4_3f3f_4e33_a648_4381818937f1.slice/crio-01c69b0ab0ccef3965537c575cf61457148908cf1db2c261078ff68804f81876 WatchSource:0}: Error finding container 01c69b0ab0ccef3965537c575cf61457148908cf1db2c261078ff68804f81876: Status 404 returned error can't find the container with id 01c69b0ab0ccef3965537c575cf61457148908cf1db2c261078ff68804f81876 Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.518026 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8bdd65479-mrv2d"] Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.519800 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-config" (OuterVolumeSpecName: "config") pod "c83b7e83-b006-4f05-9f00-aa03173c05d9" (UID: "c83b7e83-b006-4f05-9f00-aa03173c05d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.537545 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c83b7e83-b006-4f05-9f00-aa03173c05d9" (UID: "c83b7e83-b006-4f05-9f00-aa03173c05d9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.548503 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8bdd65479-mrv2d" event={"ID":"8f6bc9e4-3f3f-4e33-a648-4381818937f1","Type":"ContainerStarted","Data":"01c69b0ab0ccef3965537c575cf61457148908cf1db2c261078ff68804f81876"} Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.551607 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ccccf969c-jgqtz" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.551719 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ccccf969c-jgqtz" event={"ID":"c83b7e83-b006-4f05-9f00-aa03173c05d9","Type":"ContainerDied","Data":"a0bb3148509f3f5c0a492bf7c88d6610899aad9c5d1728f02471ee5ed42d452b"} Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.551850 4697 scope.go:117] "RemoveContainer" containerID="fcd98db175548bf792a3dde4f85acce5344b265205cdc49b7c80a39ace32143d" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.554243 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c83b7e83-b006-4f05-9f00-aa03173c05d9" (UID: "c83b7e83-b006-4f05-9f00-aa03173c05d9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.554341 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-internal-tls-certs\") pod \"c83b7e83-b006-4f05-9f00-aa03173c05d9\" (UID: \"c83b7e83-b006-4f05-9f00-aa03173c05d9\") " Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.554745 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: W0127 15:30:39.554920 4697 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c83b7e83-b006-4f05-9f00-aa03173c05d9/volumes/kubernetes.io~secret/internal-tls-certs Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.554939 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c83b7e83-b006-4f05-9f00-aa03173c05d9" (UID: "c83b7e83-b006-4f05-9f00-aa03173c05d9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.560041 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.560088 4697 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.560106 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrbs6\" (UniqueName: \"kubernetes.io/projected/c83b7e83-b006-4f05-9f00-aa03173c05d9-kube-api-access-nrbs6\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.571820 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c83b7e83-b006-4f05-9f00-aa03173c05d9" (UID: "c83b7e83-b006-4f05-9f00-aa03173c05d9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.573183 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.573502 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70da0843-011d-422d-bc59-479d90e689a8","Type":"ContainerDied","Data":"91106f703021d9c55ce074eb58a4fcca0a67b20e934e2cea01f117e7070b17b6"} Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.584504 4697 scope.go:117] "RemoveContainer" containerID="d9a2cb0992d090f183121f1d4ff95b09feb8cb2dcaa7f40e67590cadc230cdde" Jan 27 15:30:39 crc kubenswrapper[4697]: E0127 15:30:39.584584 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="3d176f9c-9152-4162-b723-1f6e8330118a" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.604260 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c83b7e83-b006-4f05-9f00-aa03173c05d9" (UID: "c83b7e83-b006-4f05-9f00-aa03173c05d9"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.622072 4697 scope.go:117] "RemoveContainer" containerID="fdfb3301b52faa56ab862269be9076c39667ffde1d926c21799f9da554b86682" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.641672 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.662262 4697 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.662314 4697 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.662324 4697 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83b7e83-b006-4f05-9f00-aa03173c05d9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.663028 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.711972 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:30:39 crc kubenswrapper[4697]: E0127 15:30:39.712470 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="proxy-httpd" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712485 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="proxy-httpd" Jan 27 15:30:39 crc kubenswrapper[4697]: E0127 15:30:39.712513 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c83b7e83-b006-4f05-9f00-aa03173c05d9" containerName="neutron-httpd" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712520 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83b7e83-b006-4f05-9f00-aa03173c05d9" containerName="neutron-httpd" Jan 27 15:30:39 crc kubenswrapper[4697]: E0127 15:30:39.712531 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83b7e83-b006-4f05-9f00-aa03173c05d9" containerName="neutron-api" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712536 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83b7e83-b006-4f05-9f00-aa03173c05d9" containerName="neutron-api" Jan 27 15:30:39 crc kubenswrapper[4697]: E0127 15:30:39.712550 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="ceilometer-notification-agent" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712556 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="ceilometer-notification-agent" Jan 27 15:30:39 crc kubenswrapper[4697]: E0127 15:30:39.712568 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="sg-core" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712574 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="sg-core" Jan 27 15:30:39 crc kubenswrapper[4697]: E0127 15:30:39.712624 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="ceilometer-central-agent" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712641 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="ceilometer-central-agent" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712818 4697 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="c83b7e83-b006-4f05-9f00-aa03173c05d9" containerName="neutron-httpd" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712836 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="ceilometer-central-agent" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712845 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="proxy-httpd" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712854 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="ceilometer-notification-agent" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712868 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="70da0843-011d-422d-bc59-479d90e689a8" containerName="sg-core" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.712878 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83b7e83-b006-4f05-9f00-aa03173c05d9" containerName="neutron-api" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.714294 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.729477 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.729597 4697 scope.go:117] "RemoveContainer" containerID="f5f8232461a2a3788177d3ee719ad49bae6cb338cba5dd1441ba3675ebf46fe7" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.732031 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.737833 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.768801 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdb2\" (UniqueName: \"kubernetes.io/projected/65153f23-c661-4b49-b86c-294fd3a42610-kube-api-access-rcdb2\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.768848 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-config-data\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.768890 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.768930 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-scripts\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.768956 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-run-httpd\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.769018 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.769034 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-log-httpd\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.836958 4697 scope.go:117] "RemoveContainer" containerID="b032d78bfb5d09a5bcc4cce4a5692183cdd0441c6395fbb0780d86701d8bd0b2" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.874239 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.874281 4697 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-log-httpd\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.874324 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcdb2\" (UniqueName: \"kubernetes.io/projected/65153f23-c661-4b49-b86c-294fd3a42610-kube-api-access-rcdb2\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.874345 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-config-data\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.874383 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.874419 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-scripts\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.874445 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-run-httpd\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 
crc kubenswrapper[4697]: I0127 15:30:39.874936 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-run-httpd\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.879378 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-log-httpd\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.908314 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-scripts\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.918711 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcdb2\" (UniqueName: \"kubernetes.io/projected/65153f23-c661-4b49-b86c-294fd3a42610-kube-api-access-rcdb2\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.918802 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.922586 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.928210 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-config-data\") pod \"ceilometer-0\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " pod="openstack/ceilometer-0" Jan 27 15:30:39 crc kubenswrapper[4697]: I0127 15:30:39.969047 4697 scope.go:117] "RemoveContainer" containerID="023d0b7bbc4282457b9c42149fe43bd96f28c8d5b0006f9f50340bf622320c7a" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.022011 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5ccccf969c-jgqtz"] Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.038310 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5ccccf969c-jgqtz"] Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.102888 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.311417 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.395553 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b2f2\" (UniqueName: \"kubernetes.io/projected/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-kube-api-access-8b2f2\") pod \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.395657 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.395723 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-scripts\") pod \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.395741 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-logs\") pod \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.395765 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-httpd-run\") pod \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.395793 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-config-data\") pod \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.395811 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-combined-ca-bundle\") pod \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.395859 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-public-tls-certs\") pod \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\" (UID: \"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6\") " Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.396354 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-logs" (OuterVolumeSpecName: "logs") pod "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" (UID: "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.396829 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" (UID: "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.401694 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" (UID: "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.410207 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-scripts" (OuterVolumeSpecName: "scripts") pod "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" (UID: "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.422988 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-kube-api-access-8b2f2" (OuterVolumeSpecName: "kube-api-access-8b2f2") pod "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" (UID: "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6"). InnerVolumeSpecName "kube-api-access-8b2f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.478801 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" (UID: "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.500641 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b2f2\" (UniqueName: \"kubernetes.io/projected/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-kube-api-access-8b2f2\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.500682 4697 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.500693 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.500703 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.500711 4697 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.500719 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.597546 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-config-data" (OuterVolumeSpecName: "config-data") pod "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" (UID: "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.604029 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.607766 4697 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.609326 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70da0843-011d-422d-bc59-479d90e689a8" path="/var/lib/kubelet/pods/70da0843-011d-422d-bc59-479d90e689a8/volumes" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.610409 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83b7e83-b006-4f05-9f00-aa03173c05d9" path="/var/lib/kubelet/pods/c83b7e83-b006-4f05-9f00-aa03173c05d9/volumes" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.621002 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.630492 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" (UID: "2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.638256 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.711291 4697 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.711319 4697 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.722289 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8bdd65479-mrv2d" event={"ID":"8f6bc9e4-3f3f-4e33-a648-4381818937f1","Type":"ContainerStarted","Data":"56579a5e87af8edc153d18a7a629b24f46ba93f76046e889cd375374f80d6d61"} Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.722325 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6","Type":"ContainerDied","Data":"a91ea5280664b3f6684f3d1af2971c6f9b18d7c75e10900d4d5f453ed001d33a"} Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.722365 4697 scope.go:117] "RemoveContainer" containerID="a58231e6612184f8a6d1d9111a036647b8f787183d1eb8701d7b6b23cbc57c25" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.762688 4697 scope.go:117] "RemoveContainer" containerID="498ed7a6e35d21a4ad94e6d7396c394cf0a42cfc1834bbb7b8a985b27a7d0073" Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 
15:30:40.919388 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:30:40 crc kubenswrapper[4697]: I0127 15:30:40.934768 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.013865 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.025833 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.047903 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:30:41 crc kubenswrapper[4697]: E0127 15:30:41.048555 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerName="glance-httpd" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.048631 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerName="glance-httpd" Jan 27 15:30:41 crc kubenswrapper[4697]: E0127 15:30:41.048705 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerName="glance-log" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.048764 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerName="glance-log" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.049027 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerName="glance-httpd" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.049097 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" containerName="glance-log" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.050089 
4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.053278 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.053874 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.079297 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.220348 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-config-data\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.220437 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.220462 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.220482 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-cz2sl\" (UniqueName: \"kubernetes.io/projected/8397e2ec-7b94-4690-b567-716eae78b6d0-kube-api-access-cz2sl\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.220499 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8397e2ec-7b94-4690-b567-716eae78b6d0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.220535 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-scripts\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.220563 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8397e2ec-7b94-4690-b567-716eae78b6d0-logs\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.220598 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.322605 4697 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-scripts\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.322798 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8397e2ec-7b94-4690-b567-716eae78b6d0-logs\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.322885 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.323049 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-config-data\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.323164 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.323281 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.323362 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz2sl\" (UniqueName: \"kubernetes.io/projected/8397e2ec-7b94-4690-b567-716eae78b6d0-kube-api-access-cz2sl\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.323445 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8397e2ec-7b94-4690-b567-716eae78b6d0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.323531 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.323315 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8397e2ec-7b94-4690-b567-716eae78b6d0-logs\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.323939 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8397e2ec-7b94-4690-b567-716eae78b6d0-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.330915 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.332485 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.332540 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-scripts\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.333963 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8397e2ec-7b94-4690-b567-716eae78b6d0-config-data\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.341740 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz2sl\" (UniqueName: \"kubernetes.io/projected/8397e2ec-7b94-4690-b567-716eae78b6d0-kube-api-access-cz2sl\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " 
pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.366601 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"8397e2ec-7b94-4690-b567-716eae78b6d0\") " pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.629957 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerStarted","Data":"e3b92ba9b2ba429d3d07e6fa1c8e9b14929d53f5fd171b9cad61fa2e0feff069"} Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.630280 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerStarted","Data":"241484ccea29501661c39c9843dff3a970835be58a9498c578bfff3b68c466a2"} Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.631766 4697 generic.go:334] "Generic (PLEG): container finished" podID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerID="94e5a0ea328ee095ebea3b739ec83ee42ff649968869720920ee234c3045166f" exitCode=137 Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.631836 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9dc56b78-cpxnx" event={"ID":"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4","Type":"ContainerDied","Data":"94e5a0ea328ee095ebea3b739ec83ee42ff649968869720920ee234c3045166f"} Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.631853 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9dc56b78-cpxnx" event={"ID":"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4","Type":"ContainerStarted","Data":"17f787dabadd4f23183666fbe27a45b7ec9d6d7331dd1eb5e5057b2d0ac827cd"} Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.639210 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-8bdd65479-mrv2d" event={"ID":"8f6bc9e4-3f3f-4e33-a648-4381818937f1","Type":"ContainerStarted","Data":"b32c022a29eb5ab2a675b725fc876511c94a0ceb494d0039fd05270900f0ab69"} Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.639360 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.674216 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 15:30:41 crc kubenswrapper[4697]: I0127 15:30:41.692333 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-8bdd65479-mrv2d" podStartSLOduration=12.692311629 podStartE2EDuration="12.692311629s" podCreationTimestamp="2026-01-27 15:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:41.684521921 +0000 UTC m=+1337.856921712" watchObservedRunningTime="2026-01-27 15:30:41.692311629 +0000 UTC m=+1337.864711410" Jan 27 15:30:42 crc kubenswrapper[4697]: I0127 15:30:42.159843 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 15:30:42 crc kubenswrapper[4697]: W0127 15:30:42.181131 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8397e2ec_7b94_4690_b567_716eae78b6d0.slice/crio-336df7f62a501ebe4de02319c4874a6091c69493e8b142be0ac511249fda7784 WatchSource:0}: Error finding container 336df7f62a501ebe4de02319c4874a6091c69493e8b142be0ac511249fda7784: Status 404 returned error can't find the container with id 336df7f62a501ebe4de02319c4874a6091c69493e8b142be0ac511249fda7784 Jan 27 15:30:42 crc kubenswrapper[4697]: I0127 15:30:42.583293 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6" path="/var/lib/kubelet/pods/2dccf05f-4c68-4ed9-91e5-6c96ec5ba1d6/volumes" Jan 27 15:30:42 crc kubenswrapper[4697]: I0127 15:30:42.688272 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8397e2ec-7b94-4690-b567-716eae78b6d0","Type":"ContainerStarted","Data":"336df7f62a501ebe4de02319c4874a6091c69493e8b142be0ac511249fda7784"} Jan 27 15:30:42 crc kubenswrapper[4697]: I0127 15:30:42.688611 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.303736 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.365346 4697 scope.go:117] "RemoveContainer" containerID="623486231f5c96e9d354a16f43b493667a4061bcfb2552b848dbf16c20781b63" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.390718 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-logs\") pod \"7a2066c3-d242-4f3b-85bd-f407f06cded2\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.399173 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-config-data\") pod \"7a2066c3-d242-4f3b-85bd-f407f06cded2\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.399206 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-scripts\") pod \"7a2066c3-d242-4f3b-85bd-f407f06cded2\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " 
Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.399258 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-combined-ca-bundle\") pod \"7a2066c3-d242-4f3b-85bd-f407f06cded2\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.399337 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-httpd-run\") pod \"7a2066c3-d242-4f3b-85bd-f407f06cded2\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.399380 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7a2066c3-d242-4f3b-85bd-f407f06cded2\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.399477 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prfc5\" (UniqueName: \"kubernetes.io/projected/7a2066c3-d242-4f3b-85bd-f407f06cded2-kube-api-access-prfc5\") pod \"7a2066c3-d242-4f3b-85bd-f407f06cded2\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.399533 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-internal-tls-certs\") pod \"7a2066c3-d242-4f3b-85bd-f407f06cded2\" (UID: \"7a2066c3-d242-4f3b-85bd-f407f06cded2\") " Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.402091 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-logs" (OuterVolumeSpecName: "logs") pod 
"7a2066c3-d242-4f3b-85bd-f407f06cded2" (UID: "7a2066c3-d242-4f3b-85bd-f407f06cded2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.413020 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7a2066c3-d242-4f3b-85bd-f407f06cded2" (UID: "7a2066c3-d242-4f3b-85bd-f407f06cded2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.428233 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "7a2066c3-d242-4f3b-85bd-f407f06cded2" (UID: "7a2066c3-d242-4f3b-85bd-f407f06cded2"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.459357 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2066c3-d242-4f3b-85bd-f407f06cded2-kube-api-access-prfc5" (OuterVolumeSpecName: "kube-api-access-prfc5") pod "7a2066c3-d242-4f3b-85bd-f407f06cded2" (UID: "7a2066c3-d242-4f3b-85bd-f407f06cded2"). InnerVolumeSpecName "kube-api-access-prfc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.459538 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-scripts" (OuterVolumeSpecName: "scripts") pod "7a2066c3-d242-4f3b-85bd-f407f06cded2" (UID: "7a2066c3-d242-4f3b-85bd-f407f06cded2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.486955 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7a2066c3-d242-4f3b-85bd-f407f06cded2" (UID: "7a2066c3-d242-4f3b-85bd-f407f06cded2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.505015 4697 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.505045 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.505055 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.505063 4697 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a2066c3-d242-4f3b-85bd-f407f06cded2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.505083 4697 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.505111 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prfc5\" (UniqueName: 
\"kubernetes.io/projected/7a2066c3-d242-4f3b-85bd-f407f06cded2-kube-api-access-prfc5\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.516993 4697 scope.go:117] "RemoveContainer" containerID="f8442d28292bd0eb9db4f39f7a15b939ae45099ab7307acd21e7dd09753aaba5" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.543963 4697 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.550007 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a2066c3-d242-4f3b-85bd-f407f06cded2" (UID: "7a2066c3-d242-4f3b-85bd-f407f06cded2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.571980 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-config-data" (OuterVolumeSpecName: "config-data") pod "7a2066c3-d242-4f3b-85bd-f407f06cded2" (UID: "7a2066c3-d242-4f3b-85bd-f407f06cded2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.589657 4697 scope.go:117] "RemoveContainer" containerID="e61e4dee6f8cd23c21d2b977892deb1e161e06dbea43b8277352e6152bbae897" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.613509 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.613543 4697 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.613554 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2066c3-d242-4f3b-85bd-f407f06cded2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.718267 4697 generic.go:334] "Generic (PLEG): container finished" podID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerID="1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93" exitCode=0 Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.718329 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a2066c3-d242-4f3b-85bd-f407f06cded2","Type":"ContainerDied","Data":"1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93"} Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.718356 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a2066c3-d242-4f3b-85bd-f407f06cded2","Type":"ContainerDied","Data":"92bcb75ce9206db107e7d6b1ed242749fb56a10d637090c778578a996a4e257e"} Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.718373 4697 scope.go:117] "RemoveContainer" 
containerID="1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.718492 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.770694 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerStarted","Data":"0295f2908baee5e13e870e56581d707ed325b1fdb7cd3f413492ba7d6b494cbf"} Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.772996 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.782970 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.802238 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:30:43 crc kubenswrapper[4697]: E0127 15:30:43.802569 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-httpd" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.802583 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-httpd" Jan 27 15:30:43 crc kubenswrapper[4697]: E0127 15:30:43.802596 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-log" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.802602 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-log" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.802772 4697 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-httpd" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.808703 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" containerName="glance-log" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.809919 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.813892 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8397e2ec-7b94-4690-b567-716eae78b6d0","Type":"ContainerStarted","Data":"89d018a9ef0edaa45c92aa4625c0817905243554d2d6282696d67f6635dfa266"} Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.823718 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.823882 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.891846 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.925795 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.925882 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.925914 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.925976 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvbg\" (UniqueName: \"kubernetes.io/projected/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-kube-api-access-wjvbg\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.925994 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.926028 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.926074 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:43 crc kubenswrapper[4697]: I0127 15:30:43.926088 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.027408 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.027481 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.027506 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.027581 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjvbg\" (UniqueName: \"kubernetes.io/projected/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-kube-api-access-wjvbg\") pod \"glance-default-internal-api-0\" 
(UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.027599 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.027625 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.027660 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.027674 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.029491 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") device mount path \"/mnt/openstack/pv04\"" 
pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.030307 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.030525 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.036352 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.039727 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.040332 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.055084 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.058770 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjvbg\" (UniqueName: \"kubernetes.io/projected/31e0b520-a0d8-4d9c-a53a-dbc75c401f4f-kube-api-access-wjvbg\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.064725 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f\") " pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.105316 4697 scope.go:117] "RemoveContainer" containerID="65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.155184 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.339983 4697 scope.go:117] "RemoveContainer" containerID="1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93" Jan 27 15:30:44 crc kubenswrapper[4697]: E0127 15:30:44.341199 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93\": container with ID starting with 1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93 not found: ID does not exist" containerID="1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.341244 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93"} err="failed to get container status \"1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93\": rpc error: code = NotFound desc = could not find container \"1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93\": container with ID starting with 1e3a54332da2bdea46a62af6888ed6d77b42bc8849e412f65815336659540e93 not found: ID does not exist" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.341268 4697 scope.go:117] "RemoveContainer" containerID="65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e" Jan 27 15:30:44 crc kubenswrapper[4697]: E0127 15:30:44.342169 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e\": container with ID starting with 65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e not found: ID does not exist" containerID="65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 
15:30:44.342192 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e"} err="failed to get container status \"65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e\": rpc error: code = NotFound desc = could not find container \"65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e\": container with ID starting with 65ccefbe4be02b1c5f60b5590cc3de45cd50c9089f2a515b32b18d017128ce7e not found: ID does not exist" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.595143 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2066c3-d242-4f3b-85bd-f407f06cded2" path="/var/lib/kubelet/pods/7a2066c3-d242-4f3b-85bd-f407f06cded2/volumes" Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.647631 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 15:30:44 crc kubenswrapper[4697]: W0127 15:30:44.651407 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31e0b520_a0d8_4d9c_a53a_dbc75c401f4f.slice/crio-40ebcaae8bfa444beb8d8913e0fb8626c9405dd9f17b67c37e894202af310de9 WatchSource:0}: Error finding container 40ebcaae8bfa444beb8d8913e0fb8626c9405dd9f17b67c37e894202af310de9: Status 404 returned error can't find the container with id 40ebcaae8bfa444beb8d8913e0fb8626c9405dd9f17b67c37e894202af310de9 Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.837490 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerStarted","Data":"e1bc7cabbdb98a377574b8a469cc4e4671483fba677339c3a52dc210ff59a833"} Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.842524 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8397e2ec-7b94-4690-b567-716eae78b6d0","Type":"ContainerStarted","Data":"fdb2f542720d2b67d693c264b72697953f938807440f19ac6b1d629d128c6492"} Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.847138 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f","Type":"ContainerStarted","Data":"40ebcaae8bfa444beb8d8913e0fb8626c9405dd9f17b67c37e894202af310de9"} Jan 27 15:30:44 crc kubenswrapper[4697]: I0127 15:30:44.870408 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.87038826 podStartE2EDuration="3.87038826s" podCreationTimestamp="2026-01-27 15:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:44.869981201 +0000 UTC m=+1341.042380982" watchObservedRunningTime="2026-01-27 15:30:44.87038826 +0000 UTC m=+1341.042788041" Jan 27 15:30:45 crc kubenswrapper[4697]: I0127 15:30:45.856475 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f","Type":"ContainerStarted","Data":"0343d890a7886693cbd0408da1f9747a633e83ff37ade64df2adf8c5a4902cdf"} Jan 27 15:30:46 crc kubenswrapper[4697]: I0127 15:30:46.866830 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"31e0b520-a0d8-4d9c-a53a-dbc75c401f4f","Type":"ContainerStarted","Data":"a6e8c1e28052d000b1422dc2c3e361f379a7842aa37b863ec69c2542cc1905e3"} Jan 27 15:30:46 crc kubenswrapper[4697]: I0127 15:30:46.869367 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerStarted","Data":"c27ba0856ff4174ec3f85fa411179971f8e544e3c50c06069f2f5693fca9e9fc"} Jan 27 15:30:46 crc 
kubenswrapper[4697]: I0127 15:30:46.870266 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 15:30:46 crc kubenswrapper[4697]: I0127 15:30:46.891314 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.891299098 podStartE2EDuration="3.891299098s" podCreationTimestamp="2026-01-27 15:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:46.888345716 +0000 UTC m=+1343.060745497" watchObservedRunningTime="2026-01-27 15:30:46.891299098 +0000 UTC m=+1343.063698879" Jan 27 15:30:46 crc kubenswrapper[4697]: I0127 15:30:46.912558 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8822002429999998 podStartE2EDuration="7.912535934s" podCreationTimestamp="2026-01-27 15:30:39 +0000 UTC" firstStartedPulling="2026-01-27 15:30:40.934484328 +0000 UTC m=+1337.106884109" lastFinishedPulling="2026-01-27 15:30:45.964820009 +0000 UTC m=+1342.137219800" observedRunningTime="2026-01-27 15:30:46.91033425 +0000 UTC m=+1343.082734031" watchObservedRunningTime="2026-01-27 15:30:46.912535934 +0000 UTC m=+1343.084935725" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.521401 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-q8kwj"] Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.522747 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.540951 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q8kwj"] Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.631575 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xldlq"] Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.632560 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.634123 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0fb28a5-449f-4063-8063-47fe549d8b30-operator-scripts\") pod \"nova-api-db-create-q8kwj\" (UID: \"b0fb28a5-449f-4063-8063-47fe549d8b30\") " pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.634160 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcl55\" (UniqueName: \"kubernetes.io/projected/b0fb28a5-449f-4063-8063-47fe549d8b30-kube-api-access-lcl55\") pod \"nova-api-db-create-q8kwj\" (UID: \"b0fb28a5-449f-4063-8063-47fe549d8b30\") " pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.649871 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xldlq"] Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.736921 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d02d925a-22fb-46b8-a24e-754310c36008-operator-scripts\") pod \"nova-cell0-db-create-xldlq\" (UID: \"d02d925a-22fb-46b8-a24e-754310c36008\") " pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:48 crc kubenswrapper[4697]: 
I0127 15:30:48.737034 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvs5z\" (UniqueName: \"kubernetes.io/projected/d02d925a-22fb-46b8-a24e-754310c36008-kube-api-access-fvs5z\") pod \"nova-cell0-db-create-xldlq\" (UID: \"d02d925a-22fb-46b8-a24e-754310c36008\") " pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.737093 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0fb28a5-449f-4063-8063-47fe549d8b30-operator-scripts\") pod \"nova-api-db-create-q8kwj\" (UID: \"b0fb28a5-449f-4063-8063-47fe549d8b30\") " pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.737124 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcl55\" (UniqueName: \"kubernetes.io/projected/b0fb28a5-449f-4063-8063-47fe549d8b30-kube-api-access-lcl55\") pod \"nova-api-db-create-q8kwj\" (UID: \"b0fb28a5-449f-4063-8063-47fe549d8b30\") " pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.740392 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0fb28a5-449f-4063-8063-47fe549d8b30-operator-scripts\") pod \"nova-api-db-create-q8kwj\" (UID: \"b0fb28a5-449f-4063-8063-47fe549d8b30\") " pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.766479 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcl55\" (UniqueName: \"kubernetes.io/projected/b0fb28a5-449f-4063-8063-47fe549d8b30-kube-api-access-lcl55\") pod \"nova-api-db-create-q8kwj\" (UID: \"b0fb28a5-449f-4063-8063-47fe549d8b30\") " pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.826056 4697 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-abaf-account-create-update-bwx76"] Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.827180 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.836167 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.838845 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d02d925a-22fb-46b8-a24e-754310c36008-operator-scripts\") pod \"nova-cell0-db-create-xldlq\" (UID: \"d02d925a-22fb-46b8-a24e-754310c36008\") " pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.838930 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvs5z\" (UniqueName: \"kubernetes.io/projected/d02d925a-22fb-46b8-a24e-754310c36008-kube-api-access-fvs5z\") pod \"nova-cell0-db-create-xldlq\" (UID: \"d02d925a-22fb-46b8-a24e-754310c36008\") " pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.839683 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d02d925a-22fb-46b8-a24e-754310c36008-operator-scripts\") pod \"nova-cell0-db-create-xldlq\" (UID: \"d02d925a-22fb-46b8-a24e-754310c36008\") " pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.845185 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-abaf-account-create-update-bwx76"] Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.869047 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.881287 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvs5z\" (UniqueName: \"kubernetes.io/projected/d02d925a-22fb-46b8-a24e-754310c36008-kube-api-access-fvs5z\") pod \"nova-cell0-db-create-xldlq\" (UID: \"d02d925a-22fb-46b8-a24e-754310c36008\") " pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.928391 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-v9vbm"] Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.929363 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.940264 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f723bccb-279a-4613-8825-6af12bf7a421-operator-scripts\") pod \"nova-api-abaf-account-create-update-bwx76\" (UID: \"f723bccb-279a-4613-8825-6af12bf7a421\") " pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.940360 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j42fv\" (UniqueName: \"kubernetes.io/projected/f723bccb-279a-4613-8825-6af12bf7a421-kube-api-access-j42fv\") pod \"nova-api-abaf-account-create-update-bwx76\" (UID: \"f723bccb-279a-4613-8825-6af12bf7a421\") " pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.946493 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:48 crc kubenswrapper[4697]: I0127 15:30:48.952449 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-v9vbm"] Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.036491 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-33d3-account-create-update-5rbnd"] Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.037510 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.043017 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.043907 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j42fv\" (UniqueName: \"kubernetes.io/projected/f723bccb-279a-4613-8825-6af12bf7a421-kube-api-access-j42fv\") pod \"nova-api-abaf-account-create-update-bwx76\" (UID: \"f723bccb-279a-4613-8825-6af12bf7a421\") " pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.043995 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6172572f-5fa4-419b-8d75-695458f7b4bd-operator-scripts\") pod \"nova-cell1-db-create-v9vbm\" (UID: \"6172572f-5fa4-419b-8d75-695458f7b4bd\") " pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.044041 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x995q\" (UniqueName: \"kubernetes.io/projected/6172572f-5fa4-419b-8d75-695458f7b4bd-kube-api-access-x995q\") pod \"nova-cell1-db-create-v9vbm\" (UID: \"6172572f-5fa4-419b-8d75-695458f7b4bd\") " 
pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.044113 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f723bccb-279a-4613-8825-6af12bf7a421-operator-scripts\") pod \"nova-api-abaf-account-create-update-bwx76\" (UID: \"f723bccb-279a-4613-8825-6af12bf7a421\") " pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.045552 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f723bccb-279a-4613-8825-6af12bf7a421-operator-scripts\") pod \"nova-api-abaf-account-create-update-bwx76\" (UID: \"f723bccb-279a-4613-8825-6af12bf7a421\") " pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.058875 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-33d3-account-create-update-5rbnd"] Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.066640 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j42fv\" (UniqueName: \"kubernetes.io/projected/f723bccb-279a-4613-8825-6af12bf7a421-kube-api-access-j42fv\") pod \"nova-api-abaf-account-create-update-bwx76\" (UID: \"f723bccb-279a-4613-8825-6af12bf7a421\") " pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.146381 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgsr\" (UniqueName: \"kubernetes.io/projected/597107a0-5b36-457f-82f2-486eb5d2880b-kube-api-access-jjgsr\") pod \"nova-cell0-33d3-account-create-update-5rbnd\" (UID: \"597107a0-5b36-457f-82f2-486eb5d2880b\") " pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.146433 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6172572f-5fa4-419b-8d75-695458f7b4bd-operator-scripts\") pod \"nova-cell1-db-create-v9vbm\" (UID: \"6172572f-5fa4-419b-8d75-695458f7b4bd\") " pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.146466 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x995q\" (UniqueName: \"kubernetes.io/projected/6172572f-5fa4-419b-8d75-695458f7b4bd-kube-api-access-x995q\") pod \"nova-cell1-db-create-v9vbm\" (UID: \"6172572f-5fa4-419b-8d75-695458f7b4bd\") " pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.146522 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597107a0-5b36-457f-82f2-486eb5d2880b-operator-scripts\") pod \"nova-cell0-33d3-account-create-update-5rbnd\" (UID: \"597107a0-5b36-457f-82f2-486eb5d2880b\") " pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.147159 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6172572f-5fa4-419b-8d75-695458f7b4bd-operator-scripts\") pod \"nova-cell1-db-create-v9vbm\" (UID: \"6172572f-5fa4-419b-8d75-695458f7b4bd\") " pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.147594 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.163068 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x995q\" (UniqueName: \"kubernetes.io/projected/6172572f-5fa4-419b-8d75-695458f7b4bd-kube-api-access-x995q\") pod \"nova-cell1-db-create-v9vbm\" (UID: \"6172572f-5fa4-419b-8d75-695458f7b4bd\") " pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.249635 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597107a0-5b36-457f-82f2-486eb5d2880b-operator-scripts\") pod \"nova-cell0-33d3-account-create-update-5rbnd\" (UID: \"597107a0-5b36-457f-82f2-486eb5d2880b\") " pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.249960 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597107a0-5b36-457f-82f2-486eb5d2880b-operator-scripts\") pod \"nova-cell0-33d3-account-create-update-5rbnd\" (UID: \"597107a0-5b36-457f-82f2-486eb5d2880b\") " pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.250134 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjgsr\" (UniqueName: \"kubernetes.io/projected/597107a0-5b36-457f-82f2-486eb5d2880b-kube-api-access-jjgsr\") pod \"nova-cell0-33d3-account-create-update-5rbnd\" (UID: \"597107a0-5b36-457f-82f2-486eb5d2880b\") " pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.256988 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d000-account-create-update-4bkrc"] Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.258343 4697 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.262740 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.269407 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d000-account-create-update-4bkrc"] Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.270106 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjgsr\" (UniqueName: \"kubernetes.io/projected/597107a0-5b36-457f-82f2-486eb5d2880b-kube-api-access-jjgsr\") pod \"nova-cell0-33d3-account-create-update-5rbnd\" (UID: \"597107a0-5b36-457f-82f2-486eb5d2880b\") " pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.358231 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvg8v\" (UniqueName: \"kubernetes.io/projected/f5466297-9701-4292-954c-ac110351d448-kube-api-access-wvg8v\") pod \"nova-cell1-d000-account-create-update-4bkrc\" (UID: \"f5466297-9701-4292-954c-ac110351d448\") " pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.358335 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5466297-9701-4292-954c-ac110351d448-operator-scripts\") pod \"nova-cell1-d000-account-create-update-4bkrc\" (UID: \"f5466297-9701-4292-954c-ac110351d448\") " pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.426147 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.438134 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.476334 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5466297-9701-4292-954c-ac110351d448-operator-scripts\") pod \"nova-cell1-d000-account-create-update-4bkrc\" (UID: \"f5466297-9701-4292-954c-ac110351d448\") " pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.476466 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvg8v\" (UniqueName: \"kubernetes.io/projected/f5466297-9701-4292-954c-ac110351d448-kube-api-access-wvg8v\") pod \"nova-cell1-d000-account-create-update-4bkrc\" (UID: \"f5466297-9701-4292-954c-ac110351d448\") " pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.477271 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5466297-9701-4292-954c-ac110351d448-operator-scripts\") pod \"nova-cell1-d000-account-create-update-4bkrc\" (UID: \"f5466297-9701-4292-954c-ac110351d448\") " pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.497559 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvg8v\" (UniqueName: \"kubernetes.io/projected/f5466297-9701-4292-954c-ac110351d448-kube-api-access-wvg8v\") pod \"nova-cell1-d000-account-create-update-4bkrc\" (UID: \"f5466297-9701-4292-954c-ac110351d448\") " pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:49 crc 
kubenswrapper[4697]: I0127 15:30:49.605624 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.672054 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.689947 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8bdd65479-mrv2d" Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.863239 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xldlq"] Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.892316 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q8kwj"] Jan 27 15:30:49 crc kubenswrapper[4697]: I0127 15:30:49.932135 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xldlq" event={"ID":"d02d925a-22fb-46b8-a24e-754310c36008","Type":"ContainerStarted","Data":"3bbbb9489e1eb6d9c7073931b305a19b4c16a8699a77c8a07d22f3f44fa1b60f"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.021699 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-abaf-account-create-update-bwx76"] Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.298630 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-33d3-account-create-update-5rbnd"] Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.328881 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-v9vbm"] Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.592366 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d000-account-create-update-4bkrc"] Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.634357 4697 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.918600 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.918865 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.944583 4697 generic.go:334] "Generic (PLEG): container finished" podID="f723bccb-279a-4613-8825-6af12bf7a421" containerID="9f332466f31941647d265892d47e52cd4f1f9d1d59ba86bfe5c382d31904f671" exitCode=0 Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.944683 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-abaf-account-create-update-bwx76" event={"ID":"f723bccb-279a-4613-8825-6af12bf7a421","Type":"ContainerDied","Data":"9f332466f31941647d265892d47e52cd4f1f9d1d59ba86bfe5c382d31904f671"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.944729 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-abaf-account-create-update-bwx76" event={"ID":"f723bccb-279a-4613-8825-6af12bf7a421","Type":"ContainerStarted","Data":"eec2173b59495f6e6f7e191741df190c9dc886c2351b468b3c3128690c9004ab"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.947085 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v9vbm" event={"ID":"6172572f-5fa4-419b-8d75-695458f7b4bd","Type":"ContainerStarted","Data":"c0ca261d130fbe08c2aee5de95eafee9b4138f8782a380b9c85f22813626fc40"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.947134 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-v9vbm" event={"ID":"6172572f-5fa4-419b-8d75-695458f7b4bd","Type":"ContainerStarted","Data":"3a40a5ec466ac1cc9c8aca1da2e5c9d6426119da5f5fe2210837a90304965404"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.950385 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" event={"ID":"597107a0-5b36-457f-82f2-486eb5d2880b","Type":"ContainerStarted","Data":"06e40ab31c616b05a9e6cf7a5543aeaf866ce670433cbde6b982f4eb1cf01b9c"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.950420 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" event={"ID":"597107a0-5b36-457f-82f2-486eb5d2880b","Type":"ContainerStarted","Data":"a3eaa89776e2da4d1a6805a982bca56dc9821a5dfca8641162be87691c8fca42"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.952746 4697 generic.go:334] "Generic (PLEG): container finished" podID="d02d925a-22fb-46b8-a24e-754310c36008" containerID="3617dbee2a7ed925b8df4578ec74a6b740a988cba21c1ea798ce504fd7a33edb" exitCode=0 Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.952807 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xldlq" event={"ID":"d02d925a-22fb-46b8-a24e-754310c36008","Type":"ContainerDied","Data":"3617dbee2a7ed925b8df4578ec74a6b740a988cba21c1ea798ce504fd7a33edb"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.955699 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d000-account-create-update-4bkrc" event={"ID":"f5466297-9701-4292-954c-ac110351d448","Type":"ContainerStarted","Data":"ad71c925f99b35347a70821180aaa38732851aa2f657330db39a2fa0d42790f1"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.955911 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d000-account-create-update-4bkrc" 
event={"ID":"f5466297-9701-4292-954c-ac110351d448","Type":"ContainerStarted","Data":"5e0f85c6e3aece12256d5bf87992e2978adad5c5dfe517c4d4321decfc06e8d6"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.958222 4697 generic.go:334] "Generic (PLEG): container finished" podID="b0fb28a5-449f-4063-8063-47fe549d8b30" containerID="517354af8f224427c18a7bb451b1a385b64dde899684989428ca535374d6c9ae" exitCode=0 Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.958360 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q8kwj" event={"ID":"b0fb28a5-449f-4063-8063-47fe549d8b30","Type":"ContainerDied","Data":"517354af8f224427c18a7bb451b1a385b64dde899684989428ca535374d6c9ae"} Jan 27 15:30:50 crc kubenswrapper[4697]: I0127 15:30:50.958430 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q8kwj" event={"ID":"b0fb28a5-449f-4063-8063-47fe549d8b30","Type":"ContainerStarted","Data":"14067ada1c5a04c0a60f8ff75163c3514c393e9b5d377a6882cea228cc2e767d"} Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.029234 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" podStartSLOduration=2.029212078 podStartE2EDuration="2.029212078s" podCreationTimestamp="2026-01-27 15:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:51.005530222 +0000 UTC m=+1347.177930003" watchObservedRunningTime="2026-01-27 15:30:51.029212078 +0000 UTC m=+1347.201611859" Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.043457 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-v9vbm" podStartSLOduration=3.043437663 podStartE2EDuration="3.043437663s" podCreationTimestamp="2026-01-27 15:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:51.039724653 +0000 UTC m=+1347.212124424" watchObservedRunningTime="2026-01-27 15:30:51.043437663 +0000 UTC m=+1347.215837444" Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.090007 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-d000-account-create-update-4bkrc" podStartSLOduration=2.089987634 podStartE2EDuration="2.089987634s" podCreationTimestamp="2026-01-27 15:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:51.062209219 +0000 UTC m=+1347.234609000" watchObservedRunningTime="2026-01-27 15:30:51.089987634 +0000 UTC m=+1347.262387415" Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.674878 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.675144 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.731943 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.732423 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.967052 4697 generic.go:334] "Generic (PLEG): container finished" podID="6172572f-5fa4-419b-8d75-695458f7b4bd" containerID="c0ca261d130fbe08c2aee5de95eafee9b4138f8782a380b9c85f22813626fc40" exitCode=0 Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.967153 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v9vbm" 
event={"ID":"6172572f-5fa4-419b-8d75-695458f7b4bd","Type":"ContainerDied","Data":"c0ca261d130fbe08c2aee5de95eafee9b4138f8782a380b9c85f22813626fc40"} Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.969157 4697 generic.go:334] "Generic (PLEG): container finished" podID="597107a0-5b36-457f-82f2-486eb5d2880b" containerID="06e40ab31c616b05a9e6cf7a5543aeaf866ce670433cbde6b982f4eb1cf01b9c" exitCode=0 Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.969199 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" event={"ID":"597107a0-5b36-457f-82f2-486eb5d2880b","Type":"ContainerDied","Data":"06e40ab31c616b05a9e6cf7a5543aeaf866ce670433cbde6b982f4eb1cf01b9c"} Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.971246 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3d176f9c-9152-4162-b723-1f6e8330118a","Type":"ContainerStarted","Data":"2cbc344f08f59fe8aaa0202a62bbf09f816f01bc3e80662f144880009922aab8"} Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.972542 4697 generic.go:334] "Generic (PLEG): container finished" podID="f5466297-9701-4292-954c-ac110351d448" containerID="ad71c925f99b35347a70821180aaa38732851aa2f657330db39a2fa0d42790f1" exitCode=0 Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.972577 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d000-account-create-update-4bkrc" event={"ID":"f5466297-9701-4292-954c-ac110351d448","Type":"ContainerDied","Data":"ad71c925f99b35347a70821180aaa38732851aa2f657330db39a2fa0d42790f1"} Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.973383 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 15:30:51 crc kubenswrapper[4697]: I0127 15:30:51.973416 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 15:30:52 crc 
kubenswrapper[4697]: I0127 15:30:52.034943 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.9272492209999998 podStartE2EDuration="30.034920301s" podCreationTimestamp="2026-01-27 15:30:22 +0000 UTC" firstStartedPulling="2026-01-27 15:30:23.004428881 +0000 UTC m=+1319.176828662" lastFinishedPulling="2026-01-27 15:30:51.112099961 +0000 UTC m=+1347.284499742" observedRunningTime="2026-01-27 15:30:52.02132086 +0000 UTC m=+1348.193720651" watchObservedRunningTime="2026-01-27 15:30:52.034920301 +0000 UTC m=+1348.207320082" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.448932 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.585544 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcl55\" (UniqueName: \"kubernetes.io/projected/b0fb28a5-449f-4063-8063-47fe549d8b30-kube-api-access-lcl55\") pod \"b0fb28a5-449f-4063-8063-47fe549d8b30\" (UID: \"b0fb28a5-449f-4063-8063-47fe549d8b30\") " Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.586115 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0fb28a5-449f-4063-8063-47fe549d8b30-operator-scripts\") pod \"b0fb28a5-449f-4063-8063-47fe549d8b30\" (UID: \"b0fb28a5-449f-4063-8063-47fe549d8b30\") " Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.587031 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0fb28a5-449f-4063-8063-47fe549d8b30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0fb28a5-449f-4063-8063-47fe549d8b30" (UID: "b0fb28a5-449f-4063-8063-47fe549d8b30"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.612752 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0fb28a5-449f-4063-8063-47fe549d8b30-kube-api-access-lcl55" (OuterVolumeSpecName: "kube-api-access-lcl55") pod "b0fb28a5-449f-4063-8063-47fe549d8b30" (UID: "b0fb28a5-449f-4063-8063-47fe549d8b30"). InnerVolumeSpecName "kube-api-access-lcl55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.690964 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0fb28a5-449f-4063-8063-47fe549d8b30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.691248 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcl55\" (UniqueName: \"kubernetes.io/projected/b0fb28a5-449f-4063-8063-47fe549d8b30-kube-api-access-lcl55\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.712428 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.713122 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.800516 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvs5z\" (UniqueName: \"kubernetes.io/projected/d02d925a-22fb-46b8-a24e-754310c36008-kube-api-access-fvs5z\") pod \"d02d925a-22fb-46b8-a24e-754310c36008\" (UID: \"d02d925a-22fb-46b8-a24e-754310c36008\") " Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.800617 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d02d925a-22fb-46b8-a24e-754310c36008-operator-scripts\") pod \"d02d925a-22fb-46b8-a24e-754310c36008\" (UID: \"d02d925a-22fb-46b8-a24e-754310c36008\") " Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.800683 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f723bccb-279a-4613-8825-6af12bf7a421-operator-scripts\") pod \"f723bccb-279a-4613-8825-6af12bf7a421\" (UID: \"f723bccb-279a-4613-8825-6af12bf7a421\") " Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.800731 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j42fv\" (UniqueName: \"kubernetes.io/projected/f723bccb-279a-4613-8825-6af12bf7a421-kube-api-access-j42fv\") pod \"f723bccb-279a-4613-8825-6af12bf7a421\" (UID: \"f723bccb-279a-4613-8825-6af12bf7a421\") " Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.807253 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f723bccb-279a-4613-8825-6af12bf7a421-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f723bccb-279a-4613-8825-6af12bf7a421" (UID: "f723bccb-279a-4613-8825-6af12bf7a421"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.813769 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02d925a-22fb-46b8-a24e-754310c36008-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d02d925a-22fb-46b8-a24e-754310c36008" (UID: "d02d925a-22fb-46b8-a24e-754310c36008"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.815344 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02d925a-22fb-46b8-a24e-754310c36008-kube-api-access-fvs5z" (OuterVolumeSpecName: "kube-api-access-fvs5z") pod "d02d925a-22fb-46b8-a24e-754310c36008" (UID: "d02d925a-22fb-46b8-a24e-754310c36008"). InnerVolumeSpecName "kube-api-access-fvs5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.840032 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f723bccb-279a-4613-8825-6af12bf7a421-kube-api-access-j42fv" (OuterVolumeSpecName: "kube-api-access-j42fv") pod "f723bccb-279a-4613-8825-6af12bf7a421" (UID: "f723bccb-279a-4613-8825-6af12bf7a421"). InnerVolumeSpecName "kube-api-access-j42fv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.903395 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvs5z\" (UniqueName: \"kubernetes.io/projected/d02d925a-22fb-46b8-a24e-754310c36008-kube-api-access-fvs5z\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.903432 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d02d925a-22fb-46b8-a24e-754310c36008-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.903442 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f723bccb-279a-4613-8825-6af12bf7a421-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.903451 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j42fv\" (UniqueName: \"kubernetes.io/projected/f723bccb-279a-4613-8825-6af12bf7a421-kube-api-access-j42fv\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.983150 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q8kwj" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.984003 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q8kwj" event={"ID":"b0fb28a5-449f-4063-8063-47fe549d8b30","Type":"ContainerDied","Data":"14067ada1c5a04c0a60f8ff75163c3514c393e9b5d377a6882cea228cc2e767d"} Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.984048 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14067ada1c5a04c0a60f8ff75163c3514c393e9b5d377a6882cea228cc2e767d" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.985127 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-abaf-account-create-update-bwx76" event={"ID":"f723bccb-279a-4613-8825-6af12bf7a421","Type":"ContainerDied","Data":"eec2173b59495f6e6f7e191741df190c9dc886c2351b468b3c3128690c9004ab"} Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.985149 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec2173b59495f6e6f7e191741df190c9dc886c2351b468b3c3128690c9004ab" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.985192 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-abaf-account-create-update-bwx76" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.989180 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xldlq" Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.996673 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xldlq" event={"ID":"d02d925a-22fb-46b8-a24e-754310c36008","Type":"ContainerDied","Data":"3bbbb9489e1eb6d9c7073931b305a19b4c16a8699a77c8a07d22f3f44fa1b60f"} Jan 27 15:30:52 crc kubenswrapper[4697]: I0127 15:30:52.996702 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bbbb9489e1eb6d9c7073931b305a19b4c16a8699a77c8a07d22f3f44fa1b60f" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.502080 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.617555 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6172572f-5fa4-419b-8d75-695458f7b4bd-operator-scripts\") pod \"6172572f-5fa4-419b-8d75-695458f7b4bd\" (UID: \"6172572f-5fa4-419b-8d75-695458f7b4bd\") " Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.617613 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x995q\" (UniqueName: \"kubernetes.io/projected/6172572f-5fa4-419b-8d75-695458f7b4bd-kube-api-access-x995q\") pod \"6172572f-5fa4-419b-8d75-695458f7b4bd\" (UID: \"6172572f-5fa4-419b-8d75-695458f7b4bd\") " Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.621117 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6172572f-5fa4-419b-8d75-695458f7b4bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6172572f-5fa4-419b-8d75-695458f7b4bd" (UID: "6172572f-5fa4-419b-8d75-695458f7b4bd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.628015 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6172572f-5fa4-419b-8d75-695458f7b4bd-kube-api-access-x995q" (OuterVolumeSpecName: "kube-api-access-x995q") pod "6172572f-5fa4-419b-8d75-695458f7b4bd" (UID: "6172572f-5fa4-419b-8d75-695458f7b4bd"). InnerVolumeSpecName "kube-api-access-x995q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.676555 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.689977 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.721138 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x995q\" (UniqueName: \"kubernetes.io/projected/6172572f-5fa4-419b-8d75-695458f7b4bd-kube-api-access-x995q\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.721167 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6172572f-5fa4-419b-8d75-695458f7b4bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.822431 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5466297-9701-4292-954c-ac110351d448-operator-scripts\") pod \"f5466297-9701-4292-954c-ac110351d448\" (UID: \"f5466297-9701-4292-954c-ac110351d448\") " Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.822686 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wvg8v\" (UniqueName: \"kubernetes.io/projected/f5466297-9701-4292-954c-ac110351d448-kube-api-access-wvg8v\") pod \"f5466297-9701-4292-954c-ac110351d448\" (UID: \"f5466297-9701-4292-954c-ac110351d448\") " Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.822815 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597107a0-5b36-457f-82f2-486eb5d2880b-operator-scripts\") pod \"597107a0-5b36-457f-82f2-486eb5d2880b\" (UID: \"597107a0-5b36-457f-82f2-486eb5d2880b\") " Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.822876 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjgsr\" (UniqueName: \"kubernetes.io/projected/597107a0-5b36-457f-82f2-486eb5d2880b-kube-api-access-jjgsr\") pod \"597107a0-5b36-457f-82f2-486eb5d2880b\" (UID: \"597107a0-5b36-457f-82f2-486eb5d2880b\") " Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.823303 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5466297-9701-4292-954c-ac110351d448-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5466297-9701-4292-954c-ac110351d448" (UID: "f5466297-9701-4292-954c-ac110351d448"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.824226 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/597107a0-5b36-457f-82f2-486eb5d2880b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "597107a0-5b36-457f-82f2-486eb5d2880b" (UID: "597107a0-5b36-457f-82f2-486eb5d2880b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.830728 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5466297-9701-4292-954c-ac110351d448-kube-api-access-wvg8v" (OuterVolumeSpecName: "kube-api-access-wvg8v") pod "f5466297-9701-4292-954c-ac110351d448" (UID: "f5466297-9701-4292-954c-ac110351d448"). InnerVolumeSpecName "kube-api-access-wvg8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.839304 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvg8v\" (UniqueName: \"kubernetes.io/projected/f5466297-9701-4292-954c-ac110351d448-kube-api-access-wvg8v\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.839344 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/597107a0-5b36-457f-82f2-486eb5d2880b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.839359 4697 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5466297-9701-4292-954c-ac110351d448-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.851039 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597107a0-5b36-457f-82f2-486eb5d2880b-kube-api-access-jjgsr" (OuterVolumeSpecName: "kube-api-access-jjgsr") pod "597107a0-5b36-457f-82f2-486eb5d2880b" (UID: "597107a0-5b36-457f-82f2-486eb5d2880b"). InnerVolumeSpecName "kube-api-access-jjgsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.941026 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjgsr\" (UniqueName: \"kubernetes.io/projected/597107a0-5b36-457f-82f2-486eb5d2880b-kube-api-access-jjgsr\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.999530 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d000-account-create-update-4bkrc" Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.999521 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d000-account-create-update-4bkrc" event={"ID":"f5466297-9701-4292-954c-ac110351d448","Type":"ContainerDied","Data":"5e0f85c6e3aece12256d5bf87992e2978adad5c5dfe517c4d4321decfc06e8d6"} Jan 27 15:30:53 crc kubenswrapper[4697]: I0127 15:30:53.999649 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e0f85c6e3aece12256d5bf87992e2978adad5c5dfe517c4d4321decfc06e8d6" Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.001204 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-v9vbm" Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.001206 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v9vbm" event={"ID":"6172572f-5fa4-419b-8d75-695458f7b4bd","Type":"ContainerDied","Data":"3a40a5ec466ac1cc9c8aca1da2e5c9d6426119da5f5fe2210837a90304965404"} Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.001269 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a40a5ec466ac1cc9c8aca1da2e5c9d6426119da5f5fe2210837a90304965404" Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.002741 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" event={"ID":"597107a0-5b36-457f-82f2-486eb5d2880b","Type":"ContainerDied","Data":"a3eaa89776e2da4d1a6805a982bca56dc9821a5dfca8641162be87691c8fca42"} Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.002771 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3eaa89776e2da4d1a6805a982bca56dc9821a5dfca8641162be87691c8fca42" Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.002883 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-33d3-account-create-update-5rbnd" Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.157394 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.157639 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.197674 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:54 crc kubenswrapper[4697]: I0127 15:30:54.213672 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:55 crc kubenswrapper[4697]: I0127 15:30:55.010727 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:55 crc kubenswrapper[4697]: I0127 15:30:55.011012 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 15:30:55 crc kubenswrapper[4697]: I0127 15:30:55.109121 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:30:55 crc kubenswrapper[4697]: I0127 15:30:55.109177 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:30:55 crc kubenswrapper[4697]: I0127 15:30:55.109219 4697 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:30:55 crc kubenswrapper[4697]: I0127 15:30:55.109825 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f08c1e0b4fdd3c835b2715925dd8d1fa9438edf0fb56dd634b6fc87424d2b5d"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:30:55 crc kubenswrapper[4697]: I0127 15:30:55.109882 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://5f08c1e0b4fdd3c835b2715925dd8d1fa9438edf0fb56dd634b6fc87424d2b5d" gracePeriod=600 Jan 27 15:30:56 crc kubenswrapper[4697]: I0127 15:30:56.021016 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="5f08c1e0b4fdd3c835b2715925dd8d1fa9438edf0fb56dd634b6fc87424d2b5d" exitCode=0 Jan 27 15:30:56 crc kubenswrapper[4697]: I0127 15:30:56.021070 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"5f08c1e0b4fdd3c835b2715925dd8d1fa9438edf0fb56dd634b6fc87424d2b5d"} Jan 27 15:30:56 crc kubenswrapper[4697]: I0127 15:30:56.021372 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc"} Jan 27 15:30:56 crc kubenswrapper[4697]: I0127 15:30:56.021392 4697 scope.go:117] "RemoveContainer" 
containerID="92d797174f2c61fd113567cb99c93ce3ccc4863dd93b46c4dc54df8e401db4fd" Jan 27 15:30:57 crc kubenswrapper[4697]: I0127 15:30:57.693289 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:30:57 crc kubenswrapper[4697]: I0127 15:30:57.694434 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="ceilometer-central-agent" containerID="cri-o://e3b92ba9b2ba429d3d07e6fa1c8e9b14929d53f5fd171b9cad61fa2e0feff069" gracePeriod=30 Jan 27 15:30:57 crc kubenswrapper[4697]: I0127 15:30:57.695414 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="proxy-httpd" containerID="cri-o://c27ba0856ff4174ec3f85fa411179971f8e544e3c50c06069f2f5693fca9e9fc" gracePeriod=30 Jan 27 15:30:57 crc kubenswrapper[4697]: I0127 15:30:57.695508 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="sg-core" containerID="cri-o://e1bc7cabbdb98a377574b8a469cc4e4671483fba677339c3a52dc210ff59a833" gracePeriod=30 Jan 27 15:30:57 crc kubenswrapper[4697]: I0127 15:30:57.695588 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="ceilometer-notification-agent" containerID="cri-o://0295f2908baee5e13e870e56581d707ed325b1fdb7cd3f413492ba7d6b494cbf" gracePeriod=30 Jan 27 15:30:57 crc kubenswrapper[4697]: I0127 15:30:57.723024 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 15:30:58 crc kubenswrapper[4697]: I0127 15:30:58.047049 4697 
generic.go:334] "Generic (PLEG): container finished" podID="65153f23-c661-4b49-b86c-294fd3a42610" containerID="c27ba0856ff4174ec3f85fa411179971f8e544e3c50c06069f2f5693fca9e9fc" exitCode=0 Jan 27 15:30:58 crc kubenswrapper[4697]: I0127 15:30:58.047401 4697 generic.go:334] "Generic (PLEG): container finished" podID="65153f23-c661-4b49-b86c-294fd3a42610" containerID="e1bc7cabbdb98a377574b8a469cc4e4671483fba677339c3a52dc210ff59a833" exitCode=2 Jan 27 15:30:58 crc kubenswrapper[4697]: I0127 15:30:58.047119 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerDied","Data":"c27ba0856ff4174ec3f85fa411179971f8e544e3c50c06069f2f5693fca9e9fc"} Jan 27 15:30:58 crc kubenswrapper[4697]: I0127 15:30:58.047460 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerDied","Data":"e1bc7cabbdb98a377574b8a469cc4e4671483fba677339c3a52dc210ff59a833"} Jan 27 15:30:58 crc kubenswrapper[4697]: I0127 15:30:58.457876 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 15:30:58 crc kubenswrapper[4697]: I0127 15:30:58.457997 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:30:58 crc kubenswrapper[4697]: I0127 15:30:58.599360 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.064010 4697 generic.go:334] "Generic (PLEG): container finished" podID="65153f23-c661-4b49-b86c-294fd3a42610" containerID="0295f2908baee5e13e870e56581d707ed325b1fdb7cd3f413492ba7d6b494cbf" exitCode=0 Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.064257 4697 generic.go:334] "Generic (PLEG): container finished" podID="65153f23-c661-4b49-b86c-294fd3a42610" 
containerID="e3b92ba9b2ba429d3d07e6fa1c8e9b14929d53f5fd171b9cad61fa2e0feff069" exitCode=0 Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.064187 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerDied","Data":"0295f2908baee5e13e870e56581d707ed325b1fdb7cd3f413492ba7d6b494cbf"} Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.065026 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerDied","Data":"e3b92ba9b2ba429d3d07e6fa1c8e9b14929d53f5fd171b9cad61fa2e0feff069"} Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.065039 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65153f23-c661-4b49-b86c-294fd3a42610","Type":"ContainerDied","Data":"241484ccea29501661c39c9843dff3a970835be58a9498c578bfff3b68c466a2"} Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.065049 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="241484ccea29501661c39c9843dff3a970835be58a9498c578bfff3b68c466a2" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.108619 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.266642 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-scripts\") pod \"65153f23-c661-4b49-b86c-294fd3a42610\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.266795 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-sg-core-conf-yaml\") pod \"65153f23-c661-4b49-b86c-294fd3a42610\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.266824 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-combined-ca-bundle\") pod \"65153f23-c661-4b49-b86c-294fd3a42610\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.266883 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-log-httpd\") pod \"65153f23-c661-4b49-b86c-294fd3a42610\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.266935 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-run-httpd\") pod \"65153f23-c661-4b49-b86c-294fd3a42610\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.266971 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-config-data\") pod \"65153f23-c661-4b49-b86c-294fd3a42610\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.266997 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcdb2\" (UniqueName: \"kubernetes.io/projected/65153f23-c661-4b49-b86c-294fd3a42610-kube-api-access-rcdb2\") pod \"65153f23-c661-4b49-b86c-294fd3a42610\" (UID: \"65153f23-c661-4b49-b86c-294fd3a42610\") " Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.268125 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65153f23-c661-4b49-b86c-294fd3a42610" (UID: "65153f23-c661-4b49-b86c-294fd3a42610"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.268268 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65153f23-c661-4b49-b86c-294fd3a42610" (UID: "65153f23-c661-4b49-b86c-294fd3a42610"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.279954 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-scripts" (OuterVolumeSpecName: "scripts") pod "65153f23-c661-4b49-b86c-294fd3a42610" (UID: "65153f23-c661-4b49-b86c-294fd3a42610"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334127 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbv8q"] Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 15:30:59.334539 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fb28a5-449f-4063-8063-47fe549d8b30" containerName="mariadb-database-create" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334555 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fb28a5-449f-4063-8063-47fe549d8b30" containerName="mariadb-database-create" Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 15:30:59.334565 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="ceilometer-central-agent" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334571 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="ceilometer-central-agent" Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 15:30:59.334595 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5466297-9701-4292-954c-ac110351d448" containerName="mariadb-account-create-update" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334602 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5466297-9701-4292-954c-ac110351d448" containerName="mariadb-account-create-update" Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 15:30:59.334614 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="sg-core" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334620 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="sg-core" Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 15:30:59.334629 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6172572f-5fa4-419b-8d75-695458f7b4bd" containerName="mariadb-database-create" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334636 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6172572f-5fa4-419b-8d75-695458f7b4bd" containerName="mariadb-database-create" Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 15:30:59.334648 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f723bccb-279a-4613-8825-6af12bf7a421" containerName="mariadb-account-create-update" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334654 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f723bccb-279a-4613-8825-6af12bf7a421" containerName="mariadb-account-create-update" Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 15:30:59.334671 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="proxy-httpd" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334676 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="proxy-httpd" Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 15:30:59.334686 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02d925a-22fb-46b8-a24e-754310c36008" containerName="mariadb-database-create" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334692 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02d925a-22fb-46b8-a24e-754310c36008" containerName="mariadb-database-create" Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 15:30:59.334698 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597107a0-5b36-457f-82f2-486eb5d2880b" containerName="mariadb-account-create-update" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334705 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="597107a0-5b36-457f-82f2-486eb5d2880b" containerName="mariadb-account-create-update" Jan 27 15:30:59 crc kubenswrapper[4697]: E0127 
15:30:59.334723 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="ceilometer-notification-agent" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334732 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="ceilometer-notification-agent" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.334988 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6172572f-5fa4-419b-8d75-695458f7b4bd" containerName="mariadb-database-create" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335011 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02d925a-22fb-46b8-a24e-754310c36008" containerName="mariadb-database-create" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335024 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f723bccb-279a-4613-8825-6af12bf7a421" containerName="mariadb-account-create-update" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335035 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="ceilometer-notification-agent" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335048 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fb28a5-449f-4063-8063-47fe549d8b30" containerName="mariadb-database-create" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335062 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5466297-9701-4292-954c-ac110351d448" containerName="mariadb-account-create-update" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335072 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="597107a0-5b36-457f-82f2-486eb5d2880b" containerName="mariadb-account-create-update" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335081 4697 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="proxy-httpd" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335091 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="ceilometer-central-agent" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335103 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="65153f23-c661-4b49-b86c-294fd3a42610" containerName="sg-core" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.335651 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.353012 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65153f23-c661-4b49-b86c-294fd3a42610-kube-api-access-rcdb2" (OuterVolumeSpecName: "kube-api-access-rcdb2") pod "65153f23-c661-4b49-b86c-294fd3a42610" (UID: "65153f23-c661-4b49-b86c-294fd3a42610"). InnerVolumeSpecName "kube-api-access-rcdb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.353533 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.353774 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.353954 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nb7hg" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.358409 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbv8q"] Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.384250 4697 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.384279 4697 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65153f23-c661-4b49-b86c-294fd3a42610-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.384288 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcdb2\" (UniqueName: \"kubernetes.io/projected/65153f23-c661-4b49-b86c-294fd3a42610-kube-api-access-rcdb2\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.384297 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.429031 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65153f23-c661-4b49-b86c-294fd3a42610" (UID: "65153f23-c661-4b49-b86c-294fd3a42610"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.498046 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-scripts\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.498115 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2qw2\" (UniqueName: \"kubernetes.io/projected/68d5ac88-2f0e-4785-8f16-908526425bf5-kube-api-access-p2qw2\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.498199 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-config-data\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.498582 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc 
kubenswrapper[4697]: I0127 15:30:59.500563 4697 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.504928 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65153f23-c661-4b49-b86c-294fd3a42610" (UID: "65153f23-c661-4b49-b86c-294fd3a42610"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.524865 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-config-data" (OuterVolumeSpecName: "config-data") pod "65153f23-c661-4b49-b86c-294fd3a42610" (UID: "65153f23-c661-4b49-b86c-294fd3a42610"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.602087 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-scripts\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.602376 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2qw2\" (UniqueName: \"kubernetes.io/projected/68d5ac88-2f0e-4785-8f16-908526425bf5-kube-api-access-p2qw2\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.602407 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-config-data\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.602487 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.602562 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.602573 4697 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65153f23-c661-4b49-b86c-294fd3a42610-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.610020 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-scripts\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.612355 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.613439 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-config-data\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.636622 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2qw2\" (UniqueName: \"kubernetes.io/projected/68d5ac88-2f0e-4785-8f16-908526425bf5-kube-api-access-p2qw2\") pod \"nova-cell0-conductor-db-sync-qbv8q\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:30:59 crc kubenswrapper[4697]: I0127 15:30:59.770042 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.067227 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.067570 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.083128 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.159838 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.184849 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.196751 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.199115 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.206978 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.221182 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.221386 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.323328 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.323375 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-run-httpd\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.323401 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kd7m\" (UniqueName: \"kubernetes.io/projected/1bf768d4-eae7-44ff-8189-12585be16e89-kube-api-access-8kd7m\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.323440 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-log-httpd\") pod \"ceilometer-0\" (UID: 
\"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.323466 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-scripts\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.323509 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.323553 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-config-data\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.357118 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbv8q"] Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.425016 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.425096 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-run-httpd\") pod \"ceilometer-0\" (UID: 
\"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.425137 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kd7m\" (UniqueName: \"kubernetes.io/projected/1bf768d4-eae7-44ff-8189-12585be16e89-kube-api-access-8kd7m\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.425200 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-log-httpd\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.425239 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-scripts\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.425301 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.425364 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-config-data\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.425691 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-run-httpd\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.426048 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-log-httpd\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.432508 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.432898 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-config-data\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.439123 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-scripts\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.461270 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.468368 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kd7m\" (UniqueName: \"kubernetes.io/projected/1bf768d4-eae7-44ff-8189-12585be16e89-kube-api-access-8kd7m\") pod \"ceilometer-0\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.577370 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65153f23-c661-4b49-b86c-294fd3a42610" path="/var/lib/kubelet/pods/65153f23-c661-4b49-b86c-294fd3a42610/volumes" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.585225 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.919626 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 15:31:00 crc kubenswrapper[4697]: I0127 15:31:00.922080 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 15:31:01 crc kubenswrapper[4697]: I0127 15:31:01.094479 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbv8q" event={"ID":"68d5ac88-2f0e-4785-8f16-908526425bf5","Type":"ContainerStarted","Data":"4290f8d03deff0d1434565b08bac6f726ef9df6fa0e28ceab02199d7a7451a52"} Jan 27 15:31:01 crc kubenswrapper[4697]: I0127 15:31:01.112187 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:01 crc kubenswrapper[4697]: W0127 15:31:01.142101 4697 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bf768d4_eae7_44ff_8189_12585be16e89.slice/crio-1c5669c2a9e0639f17af1813db03277ad7e136d662c0776d1ae3300a7cd9847f WatchSource:0}: Error finding container 1c5669c2a9e0639f17af1813db03277ad7e136d662c0776d1ae3300a7cd9847f: Status 404 returned error can't find the container with id 1c5669c2a9e0639f17af1813db03277ad7e136d662c0776d1ae3300a7cd9847f Jan 27 15:31:02 crc kubenswrapper[4697]: I0127 15:31:02.106261 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerStarted","Data":"27492c2c3bfab31c4d3b4b7947a6484fc42f3a22c27d0482832fe61e6e124530"} Jan 27 15:31:02 crc kubenswrapper[4697]: I0127 15:31:02.106725 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerStarted","Data":"1c5669c2a9e0639f17af1813db03277ad7e136d662c0776d1ae3300a7cd9847f"} Jan 27 15:31:03 crc kubenswrapper[4697]: I0127 15:31:03.123820 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerStarted","Data":"4edc4b4bccc40800f1c66ffa912b5176c84f47f28e5aae9bd6a13d7a62d49a72"} Jan 27 15:31:04 crc kubenswrapper[4697]: I0127 15:31:04.144514 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerStarted","Data":"4162b57a6d68fb074c56bed34a6faf6a97c03baef8499213ebfd428e8ca3944b"} Jan 27 15:31:04 crc kubenswrapper[4697]: I0127 15:31:04.782887 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:05 crc kubenswrapper[4697]: I0127 15:31:05.634007 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" 
probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:31:05 crc kubenswrapper[4697]: I0127 15:31:05.634351 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:31:05 crc kubenswrapper[4697]: I0127 15:31:05.635089 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"ebe38bf6f6e82a4ae410ea90d70082a99bbf9864bd0a371e11f885c6c1ee2d61"} pod="openstack/horizon-5965fc65fb-dvhzz" containerMessage="Container horizon failed startup probe, will be restarted" Jan 27 15:31:05 crc kubenswrapper[4697]: I0127 15:31:05.635124 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" containerID="cri-o://ebe38bf6f6e82a4ae410ea90d70082a99bbf9864bd0a371e11f885c6c1ee2d61" gracePeriod=30 Jan 27 15:31:06 crc kubenswrapper[4697]: I0127 15:31:06.184154 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerStarted","Data":"269a5e8b4b15c9dbb16c18cb87962e8036a49f3d145b3e73bc26125bb31f9919"} Jan 27 15:31:06 crc kubenswrapper[4697]: I0127 15:31:06.184743 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="ceilometer-central-agent" containerID="cri-o://27492c2c3bfab31c4d3b4b7947a6484fc42f3a22c27d0482832fe61e6e124530" gracePeriod=30 Jan 27 15:31:06 crc kubenswrapper[4697]: I0127 15:31:06.184892 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 15:31:06 crc kubenswrapper[4697]: I0127 15:31:06.185410 4697 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="proxy-httpd" containerID="cri-o://269a5e8b4b15c9dbb16c18cb87962e8036a49f3d145b3e73bc26125bb31f9919" gracePeriod=30 Jan 27 15:31:06 crc kubenswrapper[4697]: I0127 15:31:06.185493 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="sg-core" containerID="cri-o://4162b57a6d68fb074c56bed34a6faf6a97c03baef8499213ebfd428e8ca3944b" gracePeriod=30 Jan 27 15:31:06 crc kubenswrapper[4697]: I0127 15:31:06.185557 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="ceilometer-notification-agent" containerID="cri-o://4edc4b4bccc40800f1c66ffa912b5176c84f47f28e5aae9bd6a13d7a62d49a72" gracePeriod=30 Jan 27 15:31:07 crc kubenswrapper[4697]: I0127 15:31:07.196037 4697 generic.go:334] "Generic (PLEG): container finished" podID="1bf768d4-eae7-44ff-8189-12585be16e89" containerID="269a5e8b4b15c9dbb16c18cb87962e8036a49f3d145b3e73bc26125bb31f9919" exitCode=0 Jan 27 15:31:07 crc kubenswrapper[4697]: I0127 15:31:07.196364 4697 generic.go:334] "Generic (PLEG): container finished" podID="1bf768d4-eae7-44ff-8189-12585be16e89" containerID="4162b57a6d68fb074c56bed34a6faf6a97c03baef8499213ebfd428e8ca3944b" exitCode=2 Jan 27 15:31:07 crc kubenswrapper[4697]: I0127 15:31:07.196377 4697 generic.go:334] "Generic (PLEG): container finished" podID="1bf768d4-eae7-44ff-8189-12585be16e89" containerID="4edc4b4bccc40800f1c66ffa912b5176c84f47f28e5aae9bd6a13d7a62d49a72" exitCode=0 Jan 27 15:31:07 crc kubenswrapper[4697]: I0127 15:31:07.196078 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerDied","Data":"269a5e8b4b15c9dbb16c18cb87962e8036a49f3d145b3e73bc26125bb31f9919"} Jan 27 15:31:07 crc kubenswrapper[4697]: I0127 15:31:07.196411 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerDied","Data":"4162b57a6d68fb074c56bed34a6faf6a97c03baef8499213ebfd428e8ca3944b"} Jan 27 15:31:07 crc kubenswrapper[4697]: I0127 15:31:07.196424 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerDied","Data":"4edc4b4bccc40800f1c66ffa912b5176c84f47f28e5aae9bd6a13d7a62d49a72"} Jan 27 15:31:08 crc kubenswrapper[4697]: I0127 15:31:08.209571 4697 generic.go:334] "Generic (PLEG): container finished" podID="1bf768d4-eae7-44ff-8189-12585be16e89" containerID="27492c2c3bfab31c4d3b4b7947a6484fc42f3a22c27d0482832fe61e6e124530" exitCode=0 Jan 27 15:31:08 crc kubenswrapper[4697]: I0127 15:31:08.209583 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerDied","Data":"27492c2c3bfab31c4d3b4b7947a6484fc42f3a22c27d0482832fe61e6e124530"} Jan 27 15:31:10 crc kubenswrapper[4697]: I0127 15:31:10.922911 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 15:31:11 crc kubenswrapper[4697]: I0127 15:31:11.261741 4697 generic.go:334] "Generic (PLEG): container finished" podID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerID="ebe38bf6f6e82a4ae410ea90d70082a99bbf9864bd0a371e11f885c6c1ee2d61" exitCode=0 Jan 27 15:31:11 crc kubenswrapper[4697]: I0127 
15:31:11.261808 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerDied","Data":"ebe38bf6f6e82a4ae410ea90d70082a99bbf9864bd0a371e11f885c6c1ee2d61"} Jan 27 15:31:11 crc kubenswrapper[4697]: I0127 15:31:11.261838 4697 scope.go:117] "RemoveContainer" containerID="e54450188c94f7298427d91a62c88df853535928735655dd6ef49dea887a8a99" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.477377 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.653002 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-run-httpd\") pod \"1bf768d4-eae7-44ff-8189-12585be16e89\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.653065 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-combined-ca-bundle\") pod \"1bf768d4-eae7-44ff-8189-12585be16e89\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.653146 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-config-data\") pod \"1bf768d4-eae7-44ff-8189-12585be16e89\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.653212 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-sg-core-conf-yaml\") pod \"1bf768d4-eae7-44ff-8189-12585be16e89\" (UID: 
\"1bf768d4-eae7-44ff-8189-12585be16e89\") " Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.653326 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kd7m\" (UniqueName: \"kubernetes.io/projected/1bf768d4-eae7-44ff-8189-12585be16e89-kube-api-access-8kd7m\") pod \"1bf768d4-eae7-44ff-8189-12585be16e89\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.653407 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-log-httpd\") pod \"1bf768d4-eae7-44ff-8189-12585be16e89\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.653452 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1bf768d4-eae7-44ff-8189-12585be16e89" (UID: "1bf768d4-eae7-44ff-8189-12585be16e89"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.653471 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-scripts\") pod \"1bf768d4-eae7-44ff-8189-12585be16e89\" (UID: \"1bf768d4-eae7-44ff-8189-12585be16e89\") " Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.654398 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1bf768d4-eae7-44ff-8189-12585be16e89" (UID: "1bf768d4-eae7-44ff-8189-12585be16e89"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.654465 4697 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.658921 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-scripts" (OuterVolumeSpecName: "scripts") pod "1bf768d4-eae7-44ff-8189-12585be16e89" (UID: "1bf768d4-eae7-44ff-8189-12585be16e89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.660710 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf768d4-eae7-44ff-8189-12585be16e89-kube-api-access-8kd7m" (OuterVolumeSpecName: "kube-api-access-8kd7m") pod "1bf768d4-eae7-44ff-8189-12585be16e89" (UID: "1bf768d4-eae7-44ff-8189-12585be16e89"). InnerVolumeSpecName "kube-api-access-8kd7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.707230 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1bf768d4-eae7-44ff-8189-12585be16e89" (UID: "1bf768d4-eae7-44ff-8189-12585be16e89"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.723207 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bf768d4-eae7-44ff-8189-12585be16e89" (UID: "1bf768d4-eae7-44ff-8189-12585be16e89"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.758390 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.758533 4697 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.758545 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kd7m\" (UniqueName: \"kubernetes.io/projected/1bf768d4-eae7-44ff-8189-12585be16e89-kube-api-access-8kd7m\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.758560 4697 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf768d4-eae7-44ff-8189-12585be16e89-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.758569 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.772324 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-config-data" (OuterVolumeSpecName: "config-data") pod "1bf768d4-eae7-44ff-8189-12585be16e89" (UID: "1bf768d4-eae7-44ff-8189-12585be16e89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:31:14 crc kubenswrapper[4697]: I0127 15:31:14.862052 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf768d4-eae7-44ff-8189-12585be16e89-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.316648 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf768d4-eae7-44ff-8189-12585be16e89","Type":"ContainerDied","Data":"1c5669c2a9e0639f17af1813db03277ad7e136d662c0776d1ae3300a7cd9847f"} Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.316687 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.317475 4697 scope.go:117] "RemoveContainer" containerID="269a5e8b4b15c9dbb16c18cb87962e8036a49f3d145b3e73bc26125bb31f9919" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.363442 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.375979 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.391494 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:15 crc kubenswrapper[4697]: E0127 15:31:15.391817 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="ceilometer-notification-agent" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.391833 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="ceilometer-notification-agent" Jan 27 15:31:15 crc kubenswrapper[4697]: E0127 15:31:15.391844 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" 
containerName="ceilometer-central-agent" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.391850 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="ceilometer-central-agent" Jan 27 15:31:15 crc kubenswrapper[4697]: E0127 15:31:15.391875 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="proxy-httpd" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.391882 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="proxy-httpd" Jan 27 15:31:15 crc kubenswrapper[4697]: E0127 15:31:15.391945 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="sg-core" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.391952 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="sg-core" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.392178 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="ceilometer-central-agent" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.392200 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="ceilometer-notification-agent" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.392211 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="proxy-httpd" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.392220 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" containerName="sg-core" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.394241 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.399625 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.399641 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.416819 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.449413 4697 scope.go:117] "RemoveContainer" containerID="4162b57a6d68fb074c56bed34a6faf6a97c03baef8499213ebfd428e8ca3944b" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.474132 4697 scope.go:117] "RemoveContainer" containerID="4edc4b4bccc40800f1c66ffa912b5176c84f47f28e5aae9bd6a13d7a62d49a72" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.497294 4697 scope.go:117] "RemoveContainer" containerID="27492c2c3bfab31c4d3b4b7947a6484fc42f3a22c27d0482832fe61e6e124530" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.580107 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.580708 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-log-httpd\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.580867 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-config-data\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.581075 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48sxz\" (UniqueName: \"kubernetes.io/projected/98d051f1-7cf9-4c72-ac30-c68d4c025c61-kube-api-access-48sxz\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.581249 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.581386 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-run-httpd\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.581532 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-scripts\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.683355 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-run-httpd\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") 
" pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.683475 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-scripts\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.683507 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.683537 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-log-httpd\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.683566 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-config-data\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.683581 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48sxz\" (UniqueName: \"kubernetes.io/projected/98d051f1-7cf9-4c72-ac30-c68d4c025c61-kube-api-access-48sxz\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.683610 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.684653 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-run-httpd\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.684654 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-log-httpd\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.688587 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-scripts\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.690813 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-config-data\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.692397 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.704605 4697 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.708489 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48sxz\" (UniqueName: \"kubernetes.io/projected/98d051f1-7cf9-4c72-ac30-c68d4c025c61-kube-api-access-48sxz\") pod \"ceilometer-0\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " pod="openstack/ceilometer-0" Jan 27 15:31:15 crc kubenswrapper[4697]: I0127 15:31:15.727392 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:16 crc kubenswrapper[4697]: I0127 15:31:16.251124 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:16 crc kubenswrapper[4697]: I0127 15:31:16.336509 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerStarted","Data":"235f2b44207cb3cbaaaf913800433e0fcad4f7ba5ef522e5c37c06eae7dd26c2"} Jan 27 15:31:16 crc kubenswrapper[4697]: I0127 15:31:16.338723 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerStarted","Data":"5482b300f2fd66a56a6e5b2b822ea47551b3a8a006085c8bcf8f05ac76f7cc29"} Jan 27 15:31:16 crc kubenswrapper[4697]: I0127 15:31:16.578971 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf768d4-eae7-44ff-8189-12585be16e89" path="/var/lib/kubelet/pods/1bf768d4-eae7-44ff-8189-12585be16e89/volumes" Jan 27 15:31:19 crc kubenswrapper[4697]: I0127 15:31:19.368859 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbv8q" 
event={"ID":"68d5ac88-2f0e-4785-8f16-908526425bf5","Type":"ContainerStarted","Data":"ea5959e372e764bda1a7480ee5898792d34f826f9d14924c3af38f6211ba8f13"} Jan 27 15:31:19 crc kubenswrapper[4697]: I0127 15:31:19.371208 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerStarted","Data":"504d28743af4dd4ded9053aede2fce8048a20eec942acbd9c85f2a29b721f676"} Jan 27 15:31:19 crc kubenswrapper[4697]: I0127 15:31:19.391564 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qbv8q" podStartSLOduration=2.483763369 podStartE2EDuration="20.391546835s" podCreationTimestamp="2026-01-27 15:30:59 +0000 UTC" firstStartedPulling="2026-01-27 15:31:00.3559907 +0000 UTC m=+1356.528390481" lastFinishedPulling="2026-01-27 15:31:18.263774166 +0000 UTC m=+1374.436173947" observedRunningTime="2026-01-27 15:31:19.389243389 +0000 UTC m=+1375.561643170" watchObservedRunningTime="2026-01-27 15:31:19.391546835 +0000 UTC m=+1375.563946616" Jan 27 15:31:20 crc kubenswrapper[4697]: I0127 15:31:20.381522 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerStarted","Data":"b49c9e557e6803da5cc5e9a8bbfc251a7fe1bba910726f8f16a3ae022ef965d6"} Jan 27 15:31:20 crc kubenswrapper[4697]: I0127 15:31:20.628803 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:31:20 crc kubenswrapper[4697]: I0127 15:31:20.628884 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:31:21 crc kubenswrapper[4697]: I0127 15:31:21.390953 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerStarted","Data":"67befd37cbc1c3cb462a3e837892b25b8eae71c3c7212b77fbfa49f1c3b80878"} Jan 27 15:31:25 crc kubenswrapper[4697]: I0127 15:31:25.923020 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:31:25 crc kubenswrapper[4697]: I0127 15:31:25.923553 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:31:25 crc kubenswrapper[4697]: I0127 15:31:25.924461 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"17f787dabadd4f23183666fbe27a45b7ec9d6d7331dd1eb5e5057b2d0ac827cd"} pod="openstack/horizon-5b9dc56b78-cpxnx" containerMessage="Container horizon failed startup probe, will be restarted" Jan 27 15:31:25 crc kubenswrapper[4697]: I0127 15:31:25.924502 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" containerID="cri-o://17f787dabadd4f23183666fbe27a45b7ec9d6d7331dd1eb5e5057b2d0ac827cd" gracePeriod=30 Jan 27 15:31:26 crc kubenswrapper[4697]: I0127 15:31:26.444560 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerStarted","Data":"411020ffcf37e62d7f9f1904a494419c167140cb80d4e55498024040126f2eac"} Jan 27 15:31:26 crc kubenswrapper[4697]: I0127 15:31:26.445097 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 15:31:30 crc kubenswrapper[4697]: I0127 15:31:30.630719 4697 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 27 15:31:31 crc kubenswrapper[4697]: I0127 15:31:31.511908 4697 generic.go:334] "Generic (PLEG): container finished" podID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerID="17f787dabadd4f23183666fbe27a45b7ec9d6d7331dd1eb5e5057b2d0ac827cd" exitCode=0 Jan 27 15:31:31 crc kubenswrapper[4697]: I0127 15:31:31.512173 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9dc56b78-cpxnx" event={"ID":"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4","Type":"ContainerDied","Data":"17f787dabadd4f23183666fbe27a45b7ec9d6d7331dd1eb5e5057b2d0ac827cd"} Jan 27 15:31:31 crc kubenswrapper[4697]: I0127 15:31:31.512204 4697 scope.go:117] "RemoveContainer" containerID="94e5a0ea328ee095ebea3b739ec83ee42ff649968869720920ee234c3045166f" Jan 27 15:31:32 crc kubenswrapper[4697]: I0127 15:31:32.388021 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.215296542 podStartE2EDuration="17.387990822s" podCreationTimestamp="2026-01-27 15:31:15 +0000 UTC" firstStartedPulling="2026-01-27 15:31:16.239600199 +0000 UTC m=+1372.411999980" lastFinishedPulling="2026-01-27 15:31:25.412294479 +0000 UTC m=+1381.584694260" observedRunningTime="2026-01-27 15:31:26.472540777 +0000 UTC m=+1382.644940558" watchObservedRunningTime="2026-01-27 15:31:32.387990822 +0000 UTC m=+1388.560390613" Jan 27 15:31:32 crc kubenswrapper[4697]: I0127 15:31:32.402106 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:32 crc kubenswrapper[4697]: I0127 15:31:32.402538 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="ceilometer-central-agent" containerID="cri-o://504d28743af4dd4ded9053aede2fce8048a20eec942acbd9c85f2a29b721f676" gracePeriod=30 Jan 27 15:31:32 crc kubenswrapper[4697]: I0127 15:31:32.402584 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="sg-core" containerID="cri-o://67befd37cbc1c3cb462a3e837892b25b8eae71c3c7212b77fbfa49f1c3b80878" gracePeriod=30 Jan 27 15:31:32 crc kubenswrapper[4697]: I0127 15:31:32.402743 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="ceilometer-notification-agent" containerID="cri-o://b49c9e557e6803da5cc5e9a8bbfc251a7fe1bba910726f8f16a3ae022ef965d6" gracePeriod=30 Jan 27 15:31:32 crc kubenswrapper[4697]: I0127 15:31:32.402760 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="proxy-httpd" containerID="cri-o://411020ffcf37e62d7f9f1904a494419c167140cb80d4e55498024040126f2eac" gracePeriod=30 Jan 27 15:31:32 crc kubenswrapper[4697]: I0127 15:31:32.526125 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9dc56b78-cpxnx" event={"ID":"ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4","Type":"ContainerStarted","Data":"4884fa9c584a4ea2cb1672da27c692ec0d6c188cd88892856c002b7eca71388f"} Jan 27 15:31:32 crc kubenswrapper[4697]: I0127 15:31:32.533561 4697 generic.go:334] "Generic (PLEG): container finished" podID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerID="67befd37cbc1c3cb462a3e837892b25b8eae71c3c7212b77fbfa49f1c3b80878" exitCode=2 Jan 27 15:31:32 crc kubenswrapper[4697]: I0127 15:31:32.533609 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerDied","Data":"67befd37cbc1c3cb462a3e837892b25b8eae71c3c7212b77fbfa49f1c3b80878"} Jan 27 15:31:33 crc kubenswrapper[4697]: I0127 15:31:33.546230 4697 generic.go:334] "Generic (PLEG): container finished" podID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerID="411020ffcf37e62d7f9f1904a494419c167140cb80d4e55498024040126f2eac" exitCode=0 Jan 27 15:31:33 crc kubenswrapper[4697]: I0127 15:31:33.546523 4697 generic.go:334] "Generic (PLEG): container finished" podID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerID="b49c9e557e6803da5cc5e9a8bbfc251a7fe1bba910726f8f16a3ae022ef965d6" exitCode=0 Jan 27 15:31:33 crc kubenswrapper[4697]: I0127 15:31:33.546306 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerDied","Data":"411020ffcf37e62d7f9f1904a494419c167140cb80d4e55498024040126f2eac"} Jan 27 15:31:33 crc kubenswrapper[4697]: I0127 15:31:33.546562 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerDied","Data":"b49c9e557e6803da5cc5e9a8bbfc251a7fe1bba910726f8f16a3ae022ef965d6"} Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.567654 4697 generic.go:334] "Generic (PLEG): container finished" podID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerID="504d28743af4dd4ded9053aede2fce8048a20eec942acbd9c85f2a29b721f676" exitCode=0 Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.580714 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerDied","Data":"504d28743af4dd4ded9053aede2fce8048a20eec942acbd9c85f2a29b721f676"} Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.929300 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.970362 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-log-httpd\") pod \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.970468 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-combined-ca-bundle\") pod \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.970543 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-run-httpd\") pod \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.970574 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48sxz\" (UniqueName: \"kubernetes.io/projected/98d051f1-7cf9-4c72-ac30-c68d4c025c61-kube-api-access-48sxz\") pod \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.970620 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-scripts\") pod \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.970647 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-sg-core-conf-yaml\") pod \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.970667 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-config-data\") pod \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\" (UID: \"98d051f1-7cf9-4c72-ac30-c68d4c025c61\") " Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.971251 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "98d051f1-7cf9-4c72-ac30-c68d4c025c61" (UID: "98d051f1-7cf9-4c72-ac30-c68d4c025c61"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.971405 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "98d051f1-7cf9-4c72-ac30-c68d4c025c61" (UID: "98d051f1-7cf9-4c72-ac30-c68d4c025c61"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.971829 4697 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.971850 4697 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d051f1-7cf9-4c72-ac30-c68d4c025c61-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:34 crc kubenswrapper[4697]: I0127 15:31:34.997111 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d051f1-7cf9-4c72-ac30-c68d4c025c61-kube-api-access-48sxz" (OuterVolumeSpecName: "kube-api-access-48sxz") pod "98d051f1-7cf9-4c72-ac30-c68d4c025c61" (UID: "98d051f1-7cf9-4c72-ac30-c68d4c025c61"). InnerVolumeSpecName "kube-api-access-48sxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.014951 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-scripts" (OuterVolumeSpecName: "scripts") pod "98d051f1-7cf9-4c72-ac30-c68d4c025c61" (UID: "98d051f1-7cf9-4c72-ac30-c68d4c025c61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.018982 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "98d051f1-7cf9-4c72-ac30-c68d4c025c61" (UID: "98d051f1-7cf9-4c72-ac30-c68d4c025c61"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.073332 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.073681 4697 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.073806 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48sxz\" (UniqueName: \"kubernetes.io/projected/98d051f1-7cf9-4c72-ac30-c68d4c025c61-kube-api-access-48sxz\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.110234 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-config-data" (OuterVolumeSpecName: "config-data") pod "98d051f1-7cf9-4c72-ac30-c68d4c025c61" (UID: "98d051f1-7cf9-4c72-ac30-c68d4c025c61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.120293 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98d051f1-7cf9-4c72-ac30-c68d4c025c61" (UID: "98d051f1-7cf9-4c72-ac30-c68d4c025c61"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.176026 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.176267 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d051f1-7cf9-4c72-ac30-c68d4c025c61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.578972 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98d051f1-7cf9-4c72-ac30-c68d4c025c61","Type":"ContainerDied","Data":"235f2b44207cb3cbaaaf913800433e0fcad4f7ba5ef522e5c37c06eae7dd26c2"} Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.579050 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.579995 4697 scope.go:117] "RemoveContainer" containerID="411020ffcf37e62d7f9f1904a494419c167140cb80d4e55498024040126f2eac" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.600182 4697 scope.go:117] "RemoveContainer" containerID="67befd37cbc1c3cb462a3e837892b25b8eae71c3c7212b77fbfa49f1c3b80878" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.619258 4697 scope.go:117] "RemoveContainer" containerID="b49c9e557e6803da5cc5e9a8bbfc251a7fe1bba910726f8f16a3ae022ef965d6" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.626959 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.646141 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.648133 4697 scope.go:117] "RemoveContainer" 
containerID="504d28743af4dd4ded9053aede2fce8048a20eec942acbd9c85f2a29b721f676" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.673722 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:35 crc kubenswrapper[4697]: E0127 15:31:35.674251 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="ceilometer-notification-agent" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.674279 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="ceilometer-notification-agent" Jan 27 15:31:35 crc kubenswrapper[4697]: E0127 15:31:35.674306 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="proxy-httpd" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.674316 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="proxy-httpd" Jan 27 15:31:35 crc kubenswrapper[4697]: E0127 15:31:35.674332 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="sg-core" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.674339 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="sg-core" Jan 27 15:31:35 crc kubenswrapper[4697]: E0127 15:31:35.674371 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="ceilometer-central-agent" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.674403 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="ceilometer-central-agent" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.674617 4697 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="proxy-httpd" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.674644 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="sg-core" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.674661 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="ceilometer-central-agent" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.674677 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" containerName="ceilometer-notification-agent" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.676690 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.680553 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.680742 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.683651 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.785002 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.785041 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-config-data\") pod 
\"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.785057 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-run-httpd\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.785087 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.785106 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2s2c\" (UniqueName: \"kubernetes.io/projected/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-kube-api-access-f2s2c\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.785250 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-log-httpd\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.785313 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-scripts\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: 
I0127 15:31:35.887272 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.887310 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-config-data\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.887326 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-run-httpd\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.887352 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.887370 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2s2c\" (UniqueName: \"kubernetes.io/projected/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-kube-api-access-f2s2c\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.887403 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-log-httpd\") pod 
\"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.887432 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-scripts\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.888732 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-log-httpd\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.888796 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-run-httpd\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.891589 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.891729 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.892427 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-config-data\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.899563 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-scripts\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:35 crc kubenswrapper[4697]: I0127 15:31:35.912308 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2s2c\" (UniqueName: \"kubernetes.io/projected/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-kube-api-access-f2s2c\") pod \"ceilometer-0\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " pod="openstack/ceilometer-0" Jan 27 15:31:36 crc kubenswrapper[4697]: I0127 15:31:36.012846 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:31:36 crc kubenswrapper[4697]: I0127 15:31:36.520248 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:31:36 crc kubenswrapper[4697]: I0127 15:31:36.582143 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d051f1-7cf9-4c72-ac30-c68d4c025c61" path="/var/lib/kubelet/pods/98d051f1-7cf9-4c72-ac30-c68d4c025c61/volumes" Jan 27 15:31:36 crc kubenswrapper[4697]: I0127 15:31:36.591838 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerStarted","Data":"bf44f1a537af6113eed83b573f4c8686d9425c210e151f85afb5c5a16414062c"} Jan 27 15:31:38 crc kubenswrapper[4697]: I0127 15:31:38.619070 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerStarted","Data":"02c13ce5699751ffc248a521ce8ea10f7ae72816264ac0d931dbf5406d971344"} Jan 27 15:31:39 crc kubenswrapper[4697]: I0127 15:31:39.628283 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerStarted","Data":"ab10c11cb891f533205f21d94ae30a3fcc03bf56b596c5d40083de3816ac850d"} Jan 27 15:31:40 crc kubenswrapper[4697]: I0127 15:31:40.629335 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 27 15:31:40 crc kubenswrapper[4697]: I0127 15:31:40.640328 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerStarted","Data":"5b2ce5859050736dc922fbccbb00f7471aad78680e2038dafce29a03dd8cc67e"} Jan 27 15:31:40 crc kubenswrapper[4697]: I0127 15:31:40.918532 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:31:40 crc kubenswrapper[4697]: I0127 15:31:40.918645 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:31:42 crc kubenswrapper[4697]: I0127 15:31:42.659181 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerStarted","Data":"59128e65e95be31d74769bfebd00b67ea58744e50173e43384730053b920ada6"} Jan 27 15:31:42 crc kubenswrapper[4697]: I0127 15:31:42.660645 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 15:31:42 crc kubenswrapper[4697]: I0127 
15:31:42.679319 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.506134366 podStartE2EDuration="7.679296597s" podCreationTimestamp="2026-01-27 15:31:35 +0000 UTC" firstStartedPulling="2026-01-27 15:31:36.510535478 +0000 UTC m=+1392.682935269" lastFinishedPulling="2026-01-27 15:31:41.683697719 +0000 UTC m=+1397.856097500" observedRunningTime="2026-01-27 15:31:42.677269968 +0000 UTC m=+1398.849669749" watchObservedRunningTime="2026-01-27 15:31:42.679296597 +0000 UTC m=+1398.851696378" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.583549 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7j258"] Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.586889 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.610423 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j258"] Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.647065 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-catalog-content\") pod \"redhat-operators-7j258\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.647126 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-utilities\") pod \"redhat-operators-7j258\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.647253 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvrck\" (UniqueName: \"kubernetes.io/projected/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-kube-api-access-dvrck\") pod \"redhat-operators-7j258\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.749322 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvrck\" (UniqueName: \"kubernetes.io/projected/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-kube-api-access-dvrck\") pod \"redhat-operators-7j258\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.750233 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-catalog-content\") pod \"redhat-operators-7j258\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.750568 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-catalog-content\") pod \"redhat-operators-7j258\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.750650 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-utilities\") pod \"redhat-operators-7j258\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.750943 4697 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-utilities\") pod \"redhat-operators-7j258\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.769011 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvrck\" (UniqueName: \"kubernetes.io/projected/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-kube-api-access-dvrck\") pod \"redhat-operators-7j258\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:49 crc kubenswrapper[4697]: I0127 15:31:49.904967 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:31:50 crc kubenswrapper[4697]: I0127 15:31:50.443240 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j258"] Jan 27 15:31:50 crc kubenswrapper[4697]: I0127 15:31:50.729042 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j258" event={"ID":"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5","Type":"ContainerStarted","Data":"2e9bb9c5460149a2c391b02c74b231c146bacd04bb2f91f1f234ffc1b54b746a"} Jan 27 15:31:50 crc kubenswrapper[4697]: I0127 15:31:50.922108 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 15:31:51 crc kubenswrapper[4697]: I0127 15:31:51.741072 4697 generic.go:334] "Generic (PLEG): container finished" podID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerID="93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a" exitCode=0 Jan 27 15:31:51 crc 
kubenswrapper[4697]: I0127 15:31:51.741255 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j258" event={"ID":"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5","Type":"ContainerDied","Data":"93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a"} Jan 27 15:31:54 crc kubenswrapper[4697]: I0127 15:31:54.771631 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j258" event={"ID":"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5","Type":"ContainerStarted","Data":"d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3"} Jan 27 15:31:55 crc kubenswrapper[4697]: I0127 15:31:55.633975 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:31:55 crc kubenswrapper[4697]: I0127 15:31:55.634080 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:31:55 crc kubenswrapper[4697]: I0127 15:31:55.634742 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"5482b300f2fd66a56a6e5b2b822ea47551b3a8a006085c8bcf8f05ac76f7cc29"} pod="openstack/horizon-5965fc65fb-dvhzz" containerMessage="Container horizon failed startup probe, will be restarted" Jan 27 15:31:55 crc kubenswrapper[4697]: I0127 15:31:55.634800 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" containerID="cri-o://5482b300f2fd66a56a6e5b2b822ea47551b3a8a006085c8bcf8f05ac76f7cc29" gracePeriod=30 Jan 27 15:32:00 crc kubenswrapper[4697]: I0127 15:32:00.826133 
4697 generic.go:334] "Generic (PLEG): container finished" podID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerID="5482b300f2fd66a56a6e5b2b822ea47551b3a8a006085c8bcf8f05ac76f7cc29" exitCode=0 Jan 27 15:32:00 crc kubenswrapper[4697]: I0127 15:32:00.826311 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerDied","Data":"5482b300f2fd66a56a6e5b2b822ea47551b3a8a006085c8bcf8f05ac76f7cc29"} Jan 27 15:32:00 crc kubenswrapper[4697]: I0127 15:32:00.827937 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerStarted","Data":"2df99ec05f3e11b983a2be572f2382aaa1f7adad62f6fff7f6f6a32e40fc7f05"} Jan 27 15:32:00 crc kubenswrapper[4697]: I0127 15:32:00.827962 4697 scope.go:117] "RemoveContainer" containerID="ebe38bf6f6e82a4ae410ea90d70082a99bbf9864bd0a371e11f885c6c1ee2d61" Jan 27 15:32:02 crc kubenswrapper[4697]: I0127 15:32:02.853275 4697 generic.go:334] "Generic (PLEG): container finished" podID="68d5ac88-2f0e-4785-8f16-908526425bf5" containerID="ea5959e372e764bda1a7480ee5898792d34f826f9d14924c3af38f6211ba8f13" exitCode=0 Jan 27 15:32:02 crc kubenswrapper[4697]: I0127 15:32:02.853318 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbv8q" event={"ID":"68d5ac88-2f0e-4785-8f16-908526425bf5","Type":"ContainerDied","Data":"ea5959e372e764bda1a7480ee5898792d34f826f9d14924c3af38f6211ba8f13"} Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.214680 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.343531 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-combined-ca-bundle\") pod \"68d5ac88-2f0e-4785-8f16-908526425bf5\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.343580 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-config-data\") pod \"68d5ac88-2f0e-4785-8f16-908526425bf5\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.343701 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-scripts\") pod \"68d5ac88-2f0e-4785-8f16-908526425bf5\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.343773 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2qw2\" (UniqueName: \"kubernetes.io/projected/68d5ac88-2f0e-4785-8f16-908526425bf5-kube-api-access-p2qw2\") pod \"68d5ac88-2f0e-4785-8f16-908526425bf5\" (UID: \"68d5ac88-2f0e-4785-8f16-908526425bf5\") " Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.355354 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d5ac88-2f0e-4785-8f16-908526425bf5-kube-api-access-p2qw2" (OuterVolumeSpecName: "kube-api-access-p2qw2") pod "68d5ac88-2f0e-4785-8f16-908526425bf5" (UID: "68d5ac88-2f0e-4785-8f16-908526425bf5"). InnerVolumeSpecName "kube-api-access-p2qw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.355424 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-scripts" (OuterVolumeSpecName: "scripts") pod "68d5ac88-2f0e-4785-8f16-908526425bf5" (UID: "68d5ac88-2f0e-4785-8f16-908526425bf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.385854 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68d5ac88-2f0e-4785-8f16-908526425bf5" (UID: "68d5ac88-2f0e-4785-8f16-908526425bf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.385997 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-config-data" (OuterVolumeSpecName: "config-data") pod "68d5ac88-2f0e-4785-8f16-908526425bf5" (UID: "68d5ac88-2f0e-4785-8f16-908526425bf5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.445266 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2qw2\" (UniqueName: \"kubernetes.io/projected/68d5ac88-2f0e-4785-8f16-908526425bf5-kube-api-access-p2qw2\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.445433 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.445506 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.445567 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5ac88-2f0e-4785-8f16-908526425bf5-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.872403 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbv8q" event={"ID":"68d5ac88-2f0e-4785-8f16-908526425bf5","Type":"ContainerDied","Data":"4290f8d03deff0d1434565b08bac6f726ef9df6fa0e28ceab02199d7a7451a52"} Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.872688 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4290f8d03deff0d1434565b08bac6f726ef9df6fa0e28ceab02199d7a7451a52" Jan 27 15:32:04 crc kubenswrapper[4697]: I0127 15:32:04.872469 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbv8q" Jan 27 15:32:05 crc kubenswrapper[4697]: I0127 15:32:05.923058 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9dc56b78-cpxnx" podUID="ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:32:05 crc kubenswrapper[4697]: I0127 15:32:05.979002 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 15:32:05 crc kubenswrapper[4697]: E0127 15:32:05.979398 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d5ac88-2f0e-4785-8f16-908526425bf5" containerName="nova-cell0-conductor-db-sync" Jan 27 15:32:05 crc kubenswrapper[4697]: I0127 15:32:05.979417 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d5ac88-2f0e-4785-8f16-908526425bf5" containerName="nova-cell0-conductor-db-sync" Jan 27 15:32:05 crc kubenswrapper[4697]: I0127 15:32:05.979598 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d5ac88-2f0e-4785-8f16-908526425bf5" containerName="nova-cell0-conductor-db-sync" Jan 27 15:32:05 crc kubenswrapper[4697]: I0127 15:32:05.980227 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:05 crc kubenswrapper[4697]: I0127 15:32:05.984968 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 15:32:05 crc kubenswrapper[4697]: I0127 15:32:05.986417 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nb7hg" Jan 27 15:32:05 crc kubenswrapper[4697]: I0127 15:32:05.994355 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.043046 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.085334 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t749\" (UniqueName: \"kubernetes.io/projected/42230c5d-4496-4618-bd71-9b11d49bde9b-kube-api-access-9t749\") pod \"nova-cell0-conductor-0\" (UID: \"42230c5d-4496-4618-bd71-9b11d49bde9b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.087892 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42230c5d-4496-4618-bd71-9b11d49bde9b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"42230c5d-4496-4618-bd71-9b11d49bde9b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.088747 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42230c5d-4496-4618-bd71-9b11d49bde9b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"42230c5d-4496-4618-bd71-9b11d49bde9b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.190342 
4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t749\" (UniqueName: \"kubernetes.io/projected/42230c5d-4496-4618-bd71-9b11d49bde9b-kube-api-access-9t749\") pod \"nova-cell0-conductor-0\" (UID: \"42230c5d-4496-4618-bd71-9b11d49bde9b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.190765 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42230c5d-4496-4618-bd71-9b11d49bde9b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"42230c5d-4496-4618-bd71-9b11d49bde9b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.191028 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42230c5d-4496-4618-bd71-9b11d49bde9b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"42230c5d-4496-4618-bd71-9b11d49bde9b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.196742 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42230c5d-4496-4618-bd71-9b11d49bde9b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"42230c5d-4496-4618-bd71-9b11d49bde9b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.208242 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42230c5d-4496-4618-bd71-9b11d49bde9b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"42230c5d-4496-4618-bd71-9b11d49bde9b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.209417 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t749\" (UniqueName: 
\"kubernetes.io/projected/42230c5d-4496-4618-bd71-9b11d49bde9b-kube-api-access-9t749\") pod \"nova-cell0-conductor-0\" (UID: \"42230c5d-4496-4618-bd71-9b11d49bde9b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.296111 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:06 crc kubenswrapper[4697]: I0127 15:32:06.960262 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 15:32:07 crc kubenswrapper[4697]: I0127 15:32:07.905723 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"42230c5d-4496-4618-bd71-9b11d49bde9b","Type":"ContainerStarted","Data":"bb4cb777b15dcab0e791c3407cf0b0e1c5582c3eaf131faf4e749bf72e76e042"} Jan 27 15:32:07 crc kubenswrapper[4697]: I0127 15:32:07.906258 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:07 crc kubenswrapper[4697]: I0127 15:32:07.906271 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"42230c5d-4496-4618-bd71-9b11d49bde9b","Type":"ContainerStarted","Data":"fd99e101a68b12b6d26d8cf3da03c6cb4cc2d6f666fea43b1f418502f7011b6f"} Jan 27 15:32:07 crc kubenswrapper[4697]: I0127 15:32:07.922733 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.92271535 podStartE2EDuration="2.92271535s" podCreationTimestamp="2026-01-27 15:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:07.920064715 +0000 UTC m=+1424.092464506" watchObservedRunningTime="2026-01-27 15:32:07.92271535 +0000 UTC m=+1424.095115131" Jan 27 15:32:08 crc kubenswrapper[4697]: I0127 15:32:08.915462 4697 generic.go:334] "Generic (PLEG): 
container finished" podID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerID="d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3" exitCode=0 Jan 27 15:32:08 crc kubenswrapper[4697]: I0127 15:32:08.915674 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j258" event={"ID":"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5","Type":"ContainerDied","Data":"d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3"} Jan 27 15:32:09 crc kubenswrapper[4697]: I0127 15:32:09.642104 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 15:32:09 crc kubenswrapper[4697]: I0127 15:32:09.642389 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ea92e796-e6f5-458e-a47f-b7d34100f837" containerName="kube-state-metrics" containerID="cri-o://7c1379f719177b3e46f07e56f9214d8deef213a4eee982fdfd7188d710e672a0" gracePeriod=30 Jan 27 15:32:09 crc kubenswrapper[4697]: I0127 15:32:09.929128 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j258" event={"ID":"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5","Type":"ContainerStarted","Data":"49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3"} Jan 27 15:32:09 crc kubenswrapper[4697]: I0127 15:32:09.932209 4697 generic.go:334] "Generic (PLEG): container finished" podID="ea92e796-e6f5-458e-a47f-b7d34100f837" containerID="7c1379f719177b3e46f07e56f9214d8deef213a4eee982fdfd7188d710e672a0" exitCode=2 Jan 27 15:32:09 crc kubenswrapper[4697]: I0127 15:32:09.932369 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ea92e796-e6f5-458e-a47f-b7d34100f837","Type":"ContainerDied","Data":"7c1379f719177b3e46f07e56f9214d8deef213a4eee982fdfd7188d710e672a0"} Jan 27 15:32:09 crc kubenswrapper[4697]: I0127 15:32:09.950277 4697 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-operators-7j258" podStartSLOduration=3.134465577 podStartE2EDuration="20.950259268s" podCreationTimestamp="2026-01-27 15:31:49 +0000 UTC" firstStartedPulling="2026-01-27 15:31:51.745473148 +0000 UTC m=+1407.917872919" lastFinishedPulling="2026-01-27 15:32:09.561266829 +0000 UTC m=+1425.733666610" observedRunningTime="2026-01-27 15:32:09.947376498 +0000 UTC m=+1426.119776279" watchObservedRunningTime="2026-01-27 15:32:09.950259268 +0000 UTC m=+1426.122659049" Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.249471 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.300333 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9kzx\" (UniqueName: \"kubernetes.io/projected/ea92e796-e6f5-458e-a47f-b7d34100f837-kube-api-access-f9kzx\") pod \"ea92e796-e6f5-458e-a47f-b7d34100f837\" (UID: \"ea92e796-e6f5-458e-a47f-b7d34100f837\") " Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.322743 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea92e796-e6f5-458e-a47f-b7d34100f837-kube-api-access-f9kzx" (OuterVolumeSpecName: "kube-api-access-f9kzx") pod "ea92e796-e6f5-458e-a47f-b7d34100f837" (UID: "ea92e796-e6f5-458e-a47f-b7d34100f837"). InnerVolumeSpecName "kube-api-access-f9kzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.402960 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9kzx\" (UniqueName: \"kubernetes.io/projected/ea92e796-e6f5-458e-a47f-b7d34100f837-kube-api-access-f9kzx\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.629167 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.629647 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.630545 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.942528 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ea92e796-e6f5-458e-a47f-b7d34100f837","Type":"ContainerDied","Data":"c1f29118d462bffab6a5dee92e2ef85537d6518473f2db018ddb652d3aff6a36"} Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.942569 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.942587 4697 scope.go:117] "RemoveContainer" containerID="7c1379f719177b3e46f07e56f9214d8deef213a4eee982fdfd7188d710e672a0" Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.976101 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.981077 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 15:32:10 crc kubenswrapper[4697]: I0127 15:32:10.999514 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 15:32:11 crc kubenswrapper[4697]: E0127 15:32:10.999973 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea92e796-e6f5-458e-a47f-b7d34100f837" containerName="kube-state-metrics" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:10.999998 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea92e796-e6f5-458e-a47f-b7d34100f837" containerName="kube-state-metrics" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.000237 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea92e796-e6f5-458e-a47f-b7d34100f837" containerName="kube-state-metrics" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.000982 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.003380 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.006599 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.012701 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/48bb37d7-5e93-4523-8526-b8b664997fb3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.012765 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bb37d7-5e93-4523-8526-b8b664997fb3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.012972 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8845\" (UniqueName: \"kubernetes.io/projected/48bb37d7-5e93-4523-8526-b8b664997fb3-kube-api-access-l8845\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.013495 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bb37d7-5e93-4523-8526-b8b664997fb3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.015260 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.116104 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/48bb37d7-5e93-4523-8526-b8b664997fb3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.116182 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bb37d7-5e93-4523-8526-b8b664997fb3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.116346 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8845\" (UniqueName: \"kubernetes.io/projected/48bb37d7-5e93-4523-8526-b8b664997fb3-kube-api-access-l8845\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.116414 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bb37d7-5e93-4523-8526-b8b664997fb3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.120217 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/48bb37d7-5e93-4523-8526-b8b664997fb3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.120441 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/48bb37d7-5e93-4523-8526-b8b664997fb3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.121470 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bb37d7-5e93-4523-8526-b8b664997fb3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.134506 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8845\" (UniqueName: \"kubernetes.io/projected/48bb37d7-5e93-4523-8526-b8b664997fb3-kube-api-access-l8845\") pod \"kube-state-metrics-0\" (UID: \"48bb37d7-5e93-4523-8526-b8b664997fb3\") " pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.322301 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.867600 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 15:32:11 crc kubenswrapper[4697]: I0127 15:32:11.954234 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48bb37d7-5e93-4523-8526-b8b664997fb3","Type":"ContainerStarted","Data":"7fb527322f8a81396d69f53fe67c5fec02d8f8d66c17857ad85a0f62cb764702"} Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.161595 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.161930 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="ceilometer-central-agent" containerID="cri-o://02c13ce5699751ffc248a521ce8ea10f7ae72816264ac0d931dbf5406d971344" gracePeriod=30 Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.162065 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="proxy-httpd" containerID="cri-o://59128e65e95be31d74769bfebd00b67ea58744e50173e43384730053b920ada6" gracePeriod=30 Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.162121 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="sg-core" containerID="cri-o://5b2ce5859050736dc922fbccbb00f7471aad78680e2038dafce29a03dd8cc67e" gracePeriod=30 Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.162162 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="ceilometer-notification-agent" 
containerID="cri-o://ab10c11cb891f533205f21d94ae30a3fcc03bf56b596c5d40083de3816ac850d" gracePeriod=30 Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.582826 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea92e796-e6f5-458e-a47f-b7d34100f837" path="/var/lib/kubelet/pods/ea92e796-e6f5-458e-a47f-b7d34100f837/volumes" Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.966914 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48bb37d7-5e93-4523-8526-b8b664997fb3","Type":"ContainerStarted","Data":"5fff4a94c2d7d44720bbfc23fc82a7b8231b2632bf01884e7efc5ea0a75d86e0"} Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.967158 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.970299 4697 generic.go:334] "Generic (PLEG): container finished" podID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerID="59128e65e95be31d74769bfebd00b67ea58744e50173e43384730053b920ada6" exitCode=0 Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.970325 4697 generic.go:334] "Generic (PLEG): container finished" podID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerID="5b2ce5859050736dc922fbccbb00f7471aad78680e2038dafce29a03dd8cc67e" exitCode=2 Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.970332 4697 generic.go:334] "Generic (PLEG): container finished" podID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerID="02c13ce5699751ffc248a521ce8ea10f7ae72816264ac0d931dbf5406d971344" exitCode=0 Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.970349 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerDied","Data":"59128e65e95be31d74769bfebd00b67ea58744e50173e43384730053b920ada6"} Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.970367 4697 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerDied","Data":"5b2ce5859050736dc922fbccbb00f7471aad78680e2038dafce29a03dd8cc67e"} Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.970377 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerDied","Data":"02c13ce5699751ffc248a521ce8ea10f7ae72816264ac0d931dbf5406d971344"} Jan 27 15:32:12 crc kubenswrapper[4697]: I0127 15:32:12.994458 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.315635126 podStartE2EDuration="2.994433467s" podCreationTimestamp="2026-01-27 15:32:10 +0000 UTC" firstStartedPulling="2026-01-27 15:32:11.874107208 +0000 UTC m=+1428.046506989" lastFinishedPulling="2026-01-27 15:32:12.552905549 +0000 UTC m=+1428.725305330" observedRunningTime="2026-01-27 15:32:12.986458843 +0000 UTC m=+1429.158858644" watchObservedRunningTime="2026-01-27 15:32:12.994433467 +0000 UTC m=+1429.166833248" Jan 27 15:32:13 crc kubenswrapper[4697]: I0127 15:32:13.407177 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:32:15 crc kubenswrapper[4697]: I0127 15:32:15.717508 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5b9dc56b78-cpxnx" Jan 27 15:32:15 crc kubenswrapper[4697]: I0127 15:32:15.811656 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5965fc65fb-dvhzz"] Jan 27 15:32:15 crc kubenswrapper[4697]: I0127 15:32:15.821397 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon-log" containerID="cri-o://d1e760cbe02185bc38a0ab3d68834dd5be89159d85d23e6c2893a23d0cd8eff0" gracePeriod=30 Jan 27 15:32:15 crc 
kubenswrapper[4697]: I0127 15:32:15.821448 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5965fc65fb-dvhzz" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" containerID="cri-o://2df99ec05f3e11b983a2be572f2382aaa1f7adad62f6fff7f6f6a32e40fc7f05" gracePeriod=30 Jan 27 15:32:16 crc kubenswrapper[4697]: I0127 15:32:16.323529 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.151568 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-szx5d"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.160275 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.166921 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.167197 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.176627 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-szx5d"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.248087 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-config-data\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.248145 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hlhl\" (UniqueName: 
\"kubernetes.io/projected/b5544478-ac7d-47a8-a27f-8da131efb0fd-kube-api-access-2hlhl\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.248177 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.248249 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-scripts\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.349874 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-config-data\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.350353 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hlhl\" (UniqueName: \"kubernetes.io/projected/b5544478-ac7d-47a8-a27f-8da131efb0fd-kube-api-access-2hlhl\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.350468 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.350679 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-scripts\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.355666 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.356636 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-scripts\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.359427 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-config-data\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.423231 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.429426 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.447050 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.461361 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hlhl\" (UniqueName: \"kubernetes.io/projected/b5544478-ac7d-47a8-a27f-8da131efb0fd-kube-api-access-2hlhl\") pod \"nova-cell0-cell-mapping-szx5d\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.463163 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32decd79-6c0c-400e-8749-d78720c44d38-logs\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.463240 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-config-data\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.463311 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt9zp\" (UniqueName: \"kubernetes.io/projected/32decd79-6c0c-400e-8749-d78720c44d38-kube-api-access-lt9zp\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.463342 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.488234 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.496244 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.590535 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-config-data\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.590711 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt9zp\" (UniqueName: \"kubernetes.io/projected/32decd79-6c0c-400e-8749-d78720c44d38-kube-api-access-lt9zp\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.590757 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.590877 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32decd79-6c0c-400e-8749-d78720c44d38-logs\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc 
kubenswrapper[4697]: I0127 15:32:17.591576 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32decd79-6c0c-400e-8749-d78720c44d38-logs\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.598562 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-config-data\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.605392 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.629064 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.631518 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.647749 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.669582 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.692233 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-config-data\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.692310 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-logs\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.695199 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmk4n\" (UniqueName: \"kubernetes.io/projected/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-kube-api-access-jmk4n\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.695338 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.704873 4697 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-757b4f8459-czt4r"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.706767 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.714139 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt9zp\" (UniqueName: \"kubernetes.io/projected/32decd79-6c0c-400e-8749-d78720c44d38-kube-api-access-lt9zp\") pod \"nova-metadata-0\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") " pod="openstack/nova-metadata-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.725312 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-czt4r"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.797425 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpd28\" (UniqueName: \"kubernetes.io/projected/2ff89669-e519-40aa-bf6e-93e0d6ebced7-kube-api-access-zpd28\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.797544 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-config-data\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.797584 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 
15:32:17.797644 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-logs\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.797713 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.797743 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.797805 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmk4n\" (UniqueName: \"kubernetes.io/projected/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-kube-api-access-jmk4n\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.798047 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-svc\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.798114 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-config\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.798293 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.799616 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-logs\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.821689 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-config-data\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.821933 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.850902 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.866381 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.871252 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.877545 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmk4n\" (UniqueName: \"kubernetes.io/projected/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-kube-api-access-jmk4n\") pod \"nova-api-0\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.902643 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.902700 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.902755 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-svc\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.904196 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-sb\") 
pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.906081 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-svc\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.911723 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-config\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.912055 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpd28\" (UniqueName: \"kubernetes.io/projected/2ff89669-e519-40aa-bf6e-93e0d6ebced7-kube-api-access-zpd28\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.912293 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.913353 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " 
pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.914148 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-config\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.926357 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.933968 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.937911 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpd28\" (UniqueName: \"kubernetes.io/projected/2ff89669-e519-40aa-bf6e-93e0d6ebced7-kube-api-access-zpd28\") pod \"dnsmasq-dns-757b4f8459-czt4r\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.944176 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.951093 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.952673 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.964057 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.970047 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.973570 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:32:17 crc kubenswrapper[4697]: I0127 15:32:17.987305 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.014428 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9tb\" (UniqueName: \"kubernetes.io/projected/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-kube-api-access-sx9tb\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.014544 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.014576 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msgb5\" (UniqueName: \"kubernetes.io/projected/00db5b9d-f113-4bc3-b628-ba42b53388b1-kube-api-access-msgb5\") pod \"nova-scheduler-0\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: 
I0127 15:32:18.014593 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-config-data\") pod \"nova-scheduler-0\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.014640 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.014655 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.054125 4697 generic.go:334] "Generic (PLEG): container finished" podID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerID="ab10c11cb891f533205f21d94ae30a3fcc03bf56b596c5d40083de3816ac850d" exitCode=0 Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.054327 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerDied","Data":"ab10c11cb891f533205f21d94ae30a3fcc03bf56b596c5d40083de3816ac850d"} Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.054355 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38","Type":"ContainerDied","Data":"bf44f1a537af6113eed83b573f4c8686d9425c210e151f85afb5c5a16414062c"} Jan 27 15:32:18 crc 
kubenswrapper[4697]: I0127 15:32:18.054383 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf44f1a537af6113eed83b573f4c8686d9425c210e151f85afb5c5a16414062c" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.116461 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.116532 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msgb5\" (UniqueName: \"kubernetes.io/projected/00db5b9d-f113-4bc3-b628-ba42b53388b1-kube-api-access-msgb5\") pod \"nova-scheduler-0\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.116560 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-config-data\") pod \"nova-scheduler-0\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.117635 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.117665 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.118118 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx9tb\" (UniqueName: \"kubernetes.io/projected/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-kube-api-access-sx9tb\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.132162 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-config-data\") pod \"nova-scheduler-0\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.132189 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.133694 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.135947 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.140630 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msgb5\" (UniqueName: \"kubernetes.io/projected/00db5b9d-f113-4bc3-b628-ba42b53388b1-kube-api-access-msgb5\") pod \"nova-scheduler-0\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.141699 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx9tb\" (UniqueName: \"kubernetes.io/projected/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-kube-api-access-sx9tb\") pod \"nova-cell1-novncproxy-0\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.382341 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.561420 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.574621 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.645489 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-scripts\") pod \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.645530 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-combined-ca-bundle\") pod \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.645573 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2s2c\" (UniqueName: \"kubernetes.io/projected/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-kube-api-access-f2s2c\") pod \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.645591 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-log-httpd\") pod \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.645633 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-config-data\") pod \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.645752 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-run-httpd\") pod \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.645855 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-sg-core-conf-yaml\") pod \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\" (UID: \"1ce006f9-2ea0-4e6a-886a-e6564c9bcb38\") " Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.647125 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" (UID: "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.660047 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-kube-api-access-f2s2c" (OuterVolumeSpecName: "kube-api-access-f2s2c") pod "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" (UID: "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38"). InnerVolumeSpecName "kube-api-access-f2s2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.660950 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" (UID: "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.672553 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-scripts" (OuterVolumeSpecName: "scripts") pod "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" (UID: "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.771864 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" (UID: "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.772381 4697 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.772395 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.772405 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2s2c\" (UniqueName: \"kubernetes.io/projected/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-kube-api-access-f2s2c\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.772421 4697 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:18 crc 
kubenswrapper[4697]: I0127 15:32:18.772430 4697 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.818333 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-czt4r"] Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.944098 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" (UID: "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.944892 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-szx5d"] Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.970085 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-config-data" (OuterVolumeSpecName: "config-data") pod "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" (UID: "1ce006f9-2ea0-4e6a-886a-e6564c9bcb38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.982634 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:18 crc kubenswrapper[4697]: I0127 15:32:18.982665 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.050108 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.076079 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qn4m"] Jan 27 15:32:19 crc kubenswrapper[4697]: E0127 15:32:19.076480 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="sg-core" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.076496 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="sg-core" Jan 27 15:32:19 crc kubenswrapper[4697]: E0127 15:32:19.076508 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="proxy-httpd" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.076514 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="proxy-httpd" Jan 27 15:32:19 crc kubenswrapper[4697]: E0127 15:32:19.076531 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="ceilometer-central-agent" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.076537 4697 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="ceilometer-central-agent" Jan 27 15:32:19 crc kubenswrapper[4697]: E0127 15:32:19.076561 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="ceilometer-notification-agent" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.076567 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="ceilometer-notification-agent" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.076761 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="sg-core" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.076817 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="ceilometer-central-agent" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.076831 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="proxy-httpd" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.076838 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" containerName="ceilometer-notification-agent" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.077382 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-szx5d" event={"ID":"b5544478-ac7d-47a8-a27f-8da131efb0fd","Type":"ContainerStarted","Data":"036616cf2287a923c88116e8e34f22b440ae3bb17546e83ab23da44c33a71745"} Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.077465 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8qn4m" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.081597 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.081607 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.082197 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.102293 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" event={"ID":"2ff89669-e519-40aa-bf6e-93e0d6ebced7","Type":"ContainerStarted","Data":"b8ece5b9d60b5baaefce0a72baf3feb919bfaf3c14d4ad0a1d12970078b90896"} Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.105988 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qn4m"] Jan 27 15:32:19 crc kubenswrapper[4697]: W0127 15:32:19.112924 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87e9d4dd_cc6a_4809_b4bb_51bc9f3b5443.slice/crio-62d3ea6b449e68d02e0335486cf7671effc89f5be223e37f88b7dba47631a354 WatchSource:0}: Error finding container 62d3ea6b449e68d02e0335486cf7671effc89f5be223e37f88b7dba47631a354: Status 404 returned error can't find the container with id 62d3ea6b449e68d02e0335486cf7671effc89f5be223e37f88b7dba47631a354 Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.193877 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.204246 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.210054 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.210343 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27hl5\" (UniqueName: \"kubernetes.io/projected/dab4d694-1486-4154-89dd-5f2f04639abe-kube-api-access-27hl5\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.210529 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-config-data\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.210698 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-scripts\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.225869 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.228836 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.232732 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.232936 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.233379 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.290575 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.317492 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.317885 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-scripts\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.317924 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.317959 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-config-data\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.318050 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.318124 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-run-httpd\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.318165 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27hl5\" (UniqueName: \"kubernetes.io/projected/dab4d694-1486-4154-89dd-5f2f04639abe-kube-api-access-27hl5\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.318201 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.318262 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-config-data\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.318294 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mxqd\" (UniqueName: \"kubernetes.io/projected/03c328cb-43ac-46ab-8677-80be0dde18e3-kube-api-access-9mxqd\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.318339 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-log-httpd\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.318365 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-scripts\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.326734 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-scripts\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.328167 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.335392 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-config-data\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.335678 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.354547 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.371447 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27hl5\" (UniqueName: \"kubernetes.io/projected/dab4d694-1486-4154-89dd-5f2f04639abe-kube-api-access-27hl5\") pod \"nova-cell1-conductor-db-sync-8qn4m\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.425771 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mxqd\" (UniqueName: \"kubernetes.io/projected/03c328cb-43ac-46ab-8677-80be0dde18e3-kube-api-access-9mxqd\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.425886 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-log-httpd\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.425934 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.425954 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-scripts\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.425975 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.425993 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-config-data\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.426047 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-run-httpd\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.426076 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.431409 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-log-httpd\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.432223 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-run-httpd\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.442846 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-scripts\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.456981 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-config-data\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.459266 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.459655 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8qn4m"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.465753 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.470929 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.508799 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mxqd\" (UniqueName: \"kubernetes.io/projected/03c328cb-43ac-46ab-8677-80be0dde18e3-kube-api-access-9mxqd\") pod \"ceilometer-0\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.676065 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.823462 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.906240 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7j258"
Jan 27 15:32:19 crc kubenswrapper[4697]: I0127 15:32:19.906282 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7j258"
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.109059 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443","Type":"ContainerStarted","Data":"62d3ea6b449e68d02e0335486cf7671effc89f5be223e37f88b7dba47631a354"}
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.115096 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32decd79-6c0c-400e-8749-d78720c44d38","Type":"ContainerStarted","Data":"469e0185a7527682d04226e94717f2e22e2e1be99055aa0a8d0966c6f87432ea"}
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.118003 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00db5b9d-f113-4bc3-b628-ba42b53388b1","Type":"ContainerStarted","Data":"f2bd466fa07ff768425f206770a41ca72c749a6800cd4f554541d67dcb0395c7"}
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.119309 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-szx5d" event={"ID":"b5544478-ac7d-47a8-a27f-8da131efb0fd","Type":"ContainerStarted","Data":"1bcbeaaf6eb253bb46ca641faacb97dbbeb6a74dd9da50d0b7167b10271fb699"}
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.132373 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" event={"ID":"2ff89669-e519-40aa-bf6e-93e0d6ebced7","Type":"ContainerStarted","Data":"4aff731f7f170e72ef347567b2ac58245f47fdb51e0677cf01073f4767e3674f"}
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.134101 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3c43bdb1-1b62-4407-87e6-14184c7d7ea2","Type":"ContainerStarted","Data":"0350b40a8da5672879f5de1928e1d81333c0cbc0b3dccaa3a738d9d6ef8b75c7"}
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.158517 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-szx5d" podStartSLOduration=3.158497543 podStartE2EDuration="3.158497543s" podCreationTimestamp="2026-01-27 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:20.156084204 +0000 UTC m=+1436.328483985" watchObservedRunningTime="2026-01-27 15:32:20.158497543 +0000 UTC m=+1436.330897324"
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.213759 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qn4m"]
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.412046 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 15:32:20 crc kubenswrapper[4697]: I0127 15:32:20.613755 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce006f9-2ea0-4e6a-886a-e6564c9bcb38" path="/var/lib/kubelet/pods/1ce006f9-2ea0-4e6a-886a-e6564c9bcb38/volumes"
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.022594 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" probeResult="failure" output=<
Jan 27 15:32:21 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s
Jan 27 15:32:21 crc kubenswrapper[4697]: >
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.159797 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8qn4m" event={"ID":"dab4d694-1486-4154-89dd-5f2f04639abe","Type":"ContainerStarted","Data":"1ff89c29f96f550b0c007c37fa86cbd840a71abf4f2d281576e8d3720d9458cf"}
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.159834 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8qn4m" event={"ID":"dab4d694-1486-4154-89dd-5f2f04639abe","Type":"ContainerStarted","Data":"8a98e38db222906f254668540946323990857febb99da5ba8e338231894f78f9"}
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.166059 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerStarted","Data":"5adb0afe24fc40aa7e95ea7a016e88f1734a987abe493fd3cdd1c5884963d390"}
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.167916 4697 generic.go:334] "Generic (PLEG): container finished" podID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" containerID="4aff731f7f170e72ef347567b2ac58245f47fdb51e0677cf01073f4767e3674f" exitCode=0
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.169109 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" event={"ID":"2ff89669-e519-40aa-bf6e-93e0d6ebced7","Type":"ContainerDied","Data":"4aff731f7f170e72ef347567b2ac58245f47fdb51e0677cf01073f4767e3674f"}
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.169134 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" event={"ID":"2ff89669-e519-40aa-bf6e-93e0d6ebced7","Type":"ContainerStarted","Data":"26d509663ef0b579cee9235ec1ed1efb1c8fabf44b0eefc9b39c6b9ae718c769"}
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.169149 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-czt4r"
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.178058 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8qn4m" podStartSLOduration=2.178022322 podStartE2EDuration="2.178022322s" podCreationTimestamp="2026-01-27 15:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:21.174125088 +0000 UTC m=+1437.346524869" watchObservedRunningTime="2026-01-27 15:32:21.178022322 +0000 UTC m=+1437.350422103"
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.202009 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" podStartSLOduration=4.201990677 podStartE2EDuration="4.201990677s" podCreationTimestamp="2026-01-27 15:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:21.196614666 +0000 UTC m=+1437.369014447" watchObservedRunningTime="2026-01-27 15:32:21.201990677 +0000 UTC m=+1437.374390458"
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.342399 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.810870 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 15:32:21 crc kubenswrapper[4697]: I0127 15:32:21.822816 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 15:32:22 crc kubenswrapper[4697]: I0127 15:32:22.199030 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerStarted","Data":"9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923"}
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.257319 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32decd79-6c0c-400e-8749-d78720c44d38","Type":"ContainerStarted","Data":"5ee84b0a03fdbfe022fe4cef18b735c36a804fc92369b610a7034af2899103b6"}
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.257430 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="32decd79-6c0c-400e-8749-d78720c44d38" containerName="nova-metadata-log" containerID="cri-o://954c72f85d95fd26b90e55fe90407ab92d0f7a12595b0cf9030c308b0535e73f" gracePeriod=30
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.257683 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="32decd79-6c0c-400e-8749-d78720c44d38" containerName="nova-metadata-metadata" containerID="cri-o://5ee84b0a03fdbfe022fe4cef18b735c36a804fc92369b610a7034af2899103b6" gracePeriod=30
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.260675 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32decd79-6c0c-400e-8749-d78720c44d38","Type":"ContainerStarted","Data":"954c72f85d95fd26b90e55fe90407ab92d0f7a12595b0cf9030c308b0535e73f"}
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.262277 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerStarted","Data":"47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6"}
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.268118 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3c43bdb1-1b62-4407-87e6-14184c7d7ea2","Type":"ContainerStarted","Data":"867d4560f7712d03fc5b944983a3ffa47a0c21fe5bbbfa7cf347a295f8b2a68f"}
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.268219 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3c43bdb1-1b62-4407-87e6-14184c7d7ea2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://867d4560f7712d03fc5b944983a3ffa47a0c21fe5bbbfa7cf347a295f8b2a68f" gracePeriod=30
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.271192 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443","Type":"ContainerStarted","Data":"21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781"}
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.271383 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443","Type":"ContainerStarted","Data":"253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e"}
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.274427 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00db5b9d-f113-4bc3-b628-ba42b53388b1","Type":"ContainerStarted","Data":"536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5"}
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.288904 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.795334065 podStartE2EDuration="9.288879754s" podCreationTimestamp="2026-01-27 15:32:17 +0000 UTC" firstStartedPulling="2026-01-27 15:32:19.279278713 +0000 UTC m=+1435.451678494" lastFinishedPulling="2026-01-27 15:32:24.772824402 +0000 UTC m=+1440.945224183" observedRunningTime="2026-01-27 15:32:26.282194431 +0000 UTC m=+1442.454594222" watchObservedRunningTime="2026-01-27 15:32:26.288879754 +0000 UTC m=+1442.461279535"
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.304757 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.681879409 podStartE2EDuration="9.30473824s" podCreationTimestamp="2026-01-27 15:32:17 +0000 UTC" firstStartedPulling="2026-01-27 15:32:19.14791465 +0000 UTC m=+1435.320314431" lastFinishedPulling="2026-01-27 15:32:24.770773471 +0000 UTC m=+1440.943173262" observedRunningTime="2026-01-27 15:32:26.302313101 +0000 UTC m=+1442.474712882" watchObservedRunningTime="2026-01-27 15:32:26.30473824 +0000 UTC m=+1442.477138021"
Jan 27 15:32:26 crc kubenswrapper[4697]: I0127 15:32:26.331134 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.946997541 podStartE2EDuration="9.331116123s" podCreationTimestamp="2026-01-27 15:32:17 +0000 UTC" firstStartedPulling="2026-01-27 15:32:19.357861488 +0000 UTC m=+1435.530261269" lastFinishedPulling="2026-01-27 15:32:24.74198007 +0000 UTC m=+1440.914379851" observedRunningTime="2026-01-27 15:32:26.326224613 +0000 UTC m=+1442.498624394" watchObservedRunningTime="2026-01-27 15:32:26.331116123 +0000 UTC m=+1442.503515904"
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.287938 4697 generic.go:334] "Generic (PLEG): container finished" podID="32decd79-6c0c-400e-8749-d78720c44d38" containerID="5ee84b0a03fdbfe022fe4cef18b735c36a804fc92369b610a7034af2899103b6" exitCode=0
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.288288 4697 generic.go:334] "Generic (PLEG): container finished" podID="32decd79-6c0c-400e-8749-d78720c44d38" containerID="954c72f85d95fd26b90e55fe90407ab92d0f7a12595b0cf9030c308b0535e73f" exitCode=143
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.288017 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32decd79-6c0c-400e-8749-d78720c44d38","Type":"ContainerDied","Data":"5ee84b0a03fdbfe022fe4cef18b735c36a804fc92369b610a7034af2899103b6"}
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.288336 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32decd79-6c0c-400e-8749-d78720c44d38","Type":"ContainerDied","Data":"954c72f85d95fd26b90e55fe90407ab92d0f7a12595b0cf9030c308b0535e73f"}
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.835766 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.882904 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.781904646 podStartE2EDuration="10.882855465s" podCreationTimestamp="2026-01-27 15:32:17 +0000 UTC" firstStartedPulling="2026-01-27 15:32:19.671647197 +0000 UTC m=+1435.844046978" lastFinishedPulling="2026-01-27 15:32:24.772598016 +0000 UTC m=+1440.944997797" observedRunningTime="2026-01-27 15:32:26.350569287 +0000 UTC m=+1442.522969068" watchObservedRunningTime="2026-01-27 15:32:27.882855465 +0000 UTC m=+1444.055255236"
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.949279 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.949334 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.969886 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt9zp\" (UniqueName: \"kubernetes.io/projected/32decd79-6c0c-400e-8749-d78720c44d38-kube-api-access-lt9zp\") pod \"32decd79-6c0c-400e-8749-d78720c44d38\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") "
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.970098 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-config-data\") pod \"32decd79-6c0c-400e-8749-d78720c44d38\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") "
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.970123 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-combined-ca-bundle\") pod \"32decd79-6c0c-400e-8749-d78720c44d38\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") "
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.971063 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32decd79-6c0c-400e-8749-d78720c44d38-logs\") pod \"32decd79-6c0c-400e-8749-d78720c44d38\" (UID: \"32decd79-6c0c-400e-8749-d78720c44d38\") "
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.971616 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32decd79-6c0c-400e-8749-d78720c44d38-logs" (OuterVolumeSpecName: "logs") pod "32decd79-6c0c-400e-8749-d78720c44d38" (UID: "32decd79-6c0c-400e-8749-d78720c44d38"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.977250 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32decd79-6c0c-400e-8749-d78720c44d38-kube-api-access-lt9zp" (OuterVolumeSpecName: "kube-api-access-lt9zp") pod "32decd79-6c0c-400e-8749-d78720c44d38" (UID: "32decd79-6c0c-400e-8749-d78720c44d38"). InnerVolumeSpecName "kube-api-access-lt9zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.984917 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-czt4r"
Jan 27 15:32:27 crc kubenswrapper[4697]: I0127 15:32:27.999267 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32decd79-6c0c-400e-8749-d78720c44d38" (UID: "32decd79-6c0c-400e-8749-d78720c44d38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.003920 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-config-data" (OuterVolumeSpecName: "config-data") pod "32decd79-6c0c-400e-8749-d78720c44d38" (UID: "32decd79-6c0c-400e-8749-d78720c44d38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.076314 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.076359 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32decd79-6c0c-400e-8749-d78720c44d38-logs\") on node \"crc\" DevicePath \"\""
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.076374 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt9zp\" (UniqueName: \"kubernetes.io/projected/32decd79-6c0c-400e-8749-d78720c44d38-kube-api-access-lt9zp\") on node \"crc\" DevicePath \"\""
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.076387 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32decd79-6c0c-400e-8749-d78720c44d38-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.104106 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4ngx7"]
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.104360 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" podUID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" containerName="dnsmasq-dns" containerID="cri-o://7663e6286d9a601286c916efa6c870f5976476bfc96b99faf55dbe8f92d4a34c" gracePeriod=10
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.319307 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32decd79-6c0c-400e-8749-d78720c44d38","Type":"ContainerDied","Data":"469e0185a7527682d04226e94717f2e22e2e1be99055aa0a8d0966c6f87432ea"}
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.320958 4697 scope.go:117] "RemoveContainer" containerID="5ee84b0a03fdbfe022fe4cef18b735c36a804fc92369b610a7034af2899103b6"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.321167 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.331444 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerStarted","Data":"92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc"}
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.336921 4697 generic.go:334] "Generic (PLEG): container finished" podID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" containerID="7663e6286d9a601286c916efa6c870f5976476bfc96b99faf55dbe8f92d4a34c" exitCode=0
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.337321 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" event={"ID":"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6","Type":"ContainerDied","Data":"7663e6286d9a601286c916efa6c870f5976476bfc96b99faf55dbe8f92d4a34c"}
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.376670 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.395522 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.395556 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.423438 4697 scope.go:117] "RemoveContainer" containerID="954c72f85d95fd26b90e55fe90407ab92d0f7a12595b0cf9030c308b0535e73f"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.437720 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.460001 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 15:32:28 crc kubenswrapper[4697]: E0127 15:32:28.460750 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32decd79-6c0c-400e-8749-d78720c44d38" containerName="nova-metadata-metadata"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.460898 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="32decd79-6c0c-400e-8749-d78720c44d38" containerName="nova-metadata-metadata"
Jan 27 15:32:28 crc kubenswrapper[4697]: E0127 15:32:28.461024 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32decd79-6c0c-400e-8749-d78720c44d38" containerName="nova-metadata-log"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.461117 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="32decd79-6c0c-400e-8749-d78720c44d38" containerName="nova-metadata-log"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.461446 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="32decd79-6c0c-400e-8749-d78720c44d38" containerName="nova-metadata-log"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.461576 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="32decd79-6c0c-400e-8749-d78720c44d38" containerName="nova-metadata-metadata"
Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.462861 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 27
15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.463065 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.466050 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.467238 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.482120 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.562991 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.579513 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32decd79-6c0c-400e-8749-d78720c44d38" path="/var/lib/kubelet/pods/32decd79-6c0c-400e-8749-d78720c44d38/volumes" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.581887 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" podUID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.600077 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trgzd\" (UniqueName: \"kubernetes.io/projected/c4b14b86-6816-484a-a804-53a00f8f9b6b-kube-api-access-trgzd\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.600160 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.600211 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b14b86-6816-484a-a804-53a00f8f9b6b-logs\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.600231 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-config-data\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.600256 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.709529 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trgzd\" (UniqueName: \"kubernetes.io/projected/c4b14b86-6816-484a-a804-53a00f8f9b6b-kube-api-access-trgzd\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.709605 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.709851 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b14b86-6816-484a-a804-53a00f8f9b6b-logs\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.709897 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-config-data\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.709970 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.710668 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b14b86-6816-484a-a804-53a00f8f9b6b-logs\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.744090 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-config-data\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.745617 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-trgzd\" (UniqueName: \"kubernetes.io/projected/c4b14b86-6816-484a-a804-53a00f8f9b6b-kube-api-access-trgzd\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.747570 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.751799 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " pod="openstack/nova-metadata-0" Jan 27 15:32:28 crc kubenswrapper[4697]: I0127 15:32:28.793707 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.033691 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.034250 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.404242 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.501215 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.652469 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.730975 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-sb\") pod \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.731027 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-nb\") pod \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.731173 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-config\") pod \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.731209 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-svc\") pod \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.731226 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-swift-storage-0\") pod \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.731294 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzvzt\" 
(UniqueName: \"kubernetes.io/projected/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-kube-api-access-lzvzt\") pod \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\" (UID: \"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6\") " Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.746434 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-kube-api-access-lzvzt" (OuterVolumeSpecName: "kube-api-access-lzvzt") pod "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" (UID: "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6"). InnerVolumeSpecName "kube-api-access-lzvzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.829376 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" (UID: "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.831934 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" (UID: "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.833139 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzvzt\" (UniqueName: \"kubernetes.io/projected/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-kube-api-access-lzvzt\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.833160 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.833169 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.839593 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-config" (OuterVolumeSpecName: "config") pod "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" (UID: "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.843225 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" (UID: "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.875267 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" (UID: "29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.937549 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.937578 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:29 crc kubenswrapper[4697]: I0127 15:32:29.937586 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.367958 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.367970 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4ngx7" event={"ID":"29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6","Type":"ContainerDied","Data":"ca46a54d19431532778657582500876d846e87db9aee269d3302d14ccd62d6b4"} Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.368025 4697 scope.go:117] "RemoveContainer" containerID="7663e6286d9a601286c916efa6c870f5976476bfc96b99faf55dbe8f92d4a34c" Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.381191 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b14b86-6816-484a-a804-53a00f8f9b6b","Type":"ContainerStarted","Data":"08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6"} Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.381235 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b14b86-6816-484a-a804-53a00f8f9b6b","Type":"ContainerStarted","Data":"dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f"} Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.381247 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b14b86-6816-484a-a804-53a00f8f9b6b","Type":"ContainerStarted","Data":"4ec76e073c83b358eb6cad39e03afb90d4642415af3bb33af7c6955761cacfc2"} Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.405163 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4ngx7"] Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.416190 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4ngx7"] Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.584312 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" 
path="/var/lib/kubelet/pods/29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6/volumes" Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.883997 4697 scope.go:117] "RemoveContainer" containerID="ff13ec4d904d0e03bae142bc4046b95950c7bee1bc8777a0f17a13b9476540fa" Jan 27 15:32:30 crc kubenswrapper[4697]: I0127 15:32:30.962372 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" probeResult="failure" output=< Jan 27 15:32:30 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:32:30 crc kubenswrapper[4697]: > Jan 27 15:32:31 crc kubenswrapper[4697]: I0127 15:32:31.426432 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.426412315 podStartE2EDuration="3.426412315s" podCreationTimestamp="2026-01-27 15:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:31.421738621 +0000 UTC m=+1447.594138402" watchObservedRunningTime="2026-01-27 15:32:31.426412315 +0000 UTC m=+1447.598812096" Jan 27 15:32:33 crc kubenswrapper[4697]: I0127 15:32:33.416920 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerStarted","Data":"0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992"} Jan 27 15:32:33 crc kubenswrapper[4697]: I0127 15:32:33.417559 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 15:32:33 crc kubenswrapper[4697]: I0127 15:32:33.447279 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.671182952 podStartE2EDuration="14.44726021s" podCreationTimestamp="2026-01-27 15:32:19 +0000 UTC" 
firstStartedPulling="2026-01-27 15:32:20.422721493 +0000 UTC m=+1436.595121284" lastFinishedPulling="2026-01-27 15:32:32.198798761 +0000 UTC m=+1448.371198542" observedRunningTime="2026-01-27 15:32:33.440364432 +0000 UTC m=+1449.612764213" watchObservedRunningTime="2026-01-27 15:32:33.44726021 +0000 UTC m=+1449.619659991" Jan 27 15:32:33 crc kubenswrapper[4697]: I0127 15:32:33.793969 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 15:32:33 crc kubenswrapper[4697]: I0127 15:32:33.794033 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 15:32:34 crc kubenswrapper[4697]: I0127 15:32:34.430680 4697 generic.go:334] "Generic (PLEG): container finished" podID="b5544478-ac7d-47a8-a27f-8da131efb0fd" containerID="1bcbeaaf6eb253bb46ca641faacb97dbbeb6a74dd9da50d0b7167b10271fb699" exitCode=0 Jan 27 15:32:34 crc kubenswrapper[4697]: I0127 15:32:34.430895 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-szx5d" event={"ID":"b5544478-ac7d-47a8-a27f-8da131efb0fd","Type":"ContainerDied","Data":"1bcbeaaf6eb253bb46ca641faacb97dbbeb6a74dd9da50d0b7167b10271fb699"} Jan 27 15:32:35 crc kubenswrapper[4697]: I0127 15:32:35.884280 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:35 crc kubenswrapper[4697]: I0127 15:32:35.954611 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-scripts\") pod \"b5544478-ac7d-47a8-a27f-8da131efb0fd\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " Jan 27 15:32:35 crc kubenswrapper[4697]: I0127 15:32:35.954657 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-config-data\") pod \"b5544478-ac7d-47a8-a27f-8da131efb0fd\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " Jan 27 15:32:35 crc kubenswrapper[4697]: I0127 15:32:35.954772 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hlhl\" (UniqueName: \"kubernetes.io/projected/b5544478-ac7d-47a8-a27f-8da131efb0fd-kube-api-access-2hlhl\") pod \"b5544478-ac7d-47a8-a27f-8da131efb0fd\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " Jan 27 15:32:35 crc kubenswrapper[4697]: I0127 15:32:35.954856 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-combined-ca-bundle\") pod \"b5544478-ac7d-47a8-a27f-8da131efb0fd\" (UID: \"b5544478-ac7d-47a8-a27f-8da131efb0fd\") " Jan 27 15:32:35 crc kubenswrapper[4697]: I0127 15:32:35.974253 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5544478-ac7d-47a8-a27f-8da131efb0fd-kube-api-access-2hlhl" (OuterVolumeSpecName: "kube-api-access-2hlhl") pod "b5544478-ac7d-47a8-a27f-8da131efb0fd" (UID: "b5544478-ac7d-47a8-a27f-8da131efb0fd"). InnerVolumeSpecName "kube-api-access-2hlhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:35 crc kubenswrapper[4697]: I0127 15:32:35.978006 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-scripts" (OuterVolumeSpecName: "scripts") pod "b5544478-ac7d-47a8-a27f-8da131efb0fd" (UID: "b5544478-ac7d-47a8-a27f-8da131efb0fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:35 crc kubenswrapper[4697]: I0127 15:32:35.993804 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-config-data" (OuterVolumeSpecName: "config-data") pod "b5544478-ac7d-47a8-a27f-8da131efb0fd" (UID: "b5544478-ac7d-47a8-a27f-8da131efb0fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:35 crc kubenswrapper[4697]: I0127 15:32:35.999428 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5544478-ac7d-47a8-a27f-8da131efb0fd" (UID: "b5544478-ac7d-47a8-a27f-8da131efb0fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.057180 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.057254 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.057266 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hlhl\" (UniqueName: \"kubernetes.io/projected/b5544478-ac7d-47a8-a27f-8da131efb0fd-kube-api-access-2hlhl\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.057278 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5544478-ac7d-47a8-a27f-8da131efb0fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.461508 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-szx5d" event={"ID":"b5544478-ac7d-47a8-a27f-8da131efb0fd","Type":"ContainerDied","Data":"036616cf2287a923c88116e8e34f22b440ae3bb17546e83ab23da44c33a71745"} Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.461546 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="036616cf2287a923c88116e8e34f22b440ae3bb17546e83ab23da44c33a71745" Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.461558 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-szx5d" Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.651990 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.652238 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-log" containerID="cri-o://253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e" gracePeriod=30 Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.652657 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-api" containerID="cri-o://21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781" gracePeriod=30 Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.671831 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.672046 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="00db5b9d-f113-4bc3-b628-ba42b53388b1" containerName="nova-scheduler-scheduler" containerID="cri-o://536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5" gracePeriod=30 Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.683323 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.683572 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c4b14b86-6816-484a-a804-53a00f8f9b6b" containerName="nova-metadata-log" containerID="cri-o://dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f" gracePeriod=30 Jan 27 15:32:36 crc kubenswrapper[4697]: I0127 15:32:36.684124 4697 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c4b14b86-6816-484a-a804-53a00f8f9b6b" containerName="nova-metadata-metadata" containerID="cri-o://08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6" gracePeriod=30 Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.234597 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.392397 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trgzd\" (UniqueName: \"kubernetes.io/projected/c4b14b86-6816-484a-a804-53a00f8f9b6b-kube-api-access-trgzd\") pod \"c4b14b86-6816-484a-a804-53a00f8f9b6b\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.392477 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-nova-metadata-tls-certs\") pod \"c4b14b86-6816-484a-a804-53a00f8f9b6b\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.392604 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-config-data\") pod \"c4b14b86-6816-484a-a804-53a00f8f9b6b\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.392636 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-combined-ca-bundle\") pod \"c4b14b86-6816-484a-a804-53a00f8f9b6b\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.392672 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b14b86-6816-484a-a804-53a00f8f9b6b-logs\") pod \"c4b14b86-6816-484a-a804-53a00f8f9b6b\" (UID: \"c4b14b86-6816-484a-a804-53a00f8f9b6b\") " Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.393346 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4b14b86-6816-484a-a804-53a00f8f9b6b-logs" (OuterVolumeSpecName: "logs") pod "c4b14b86-6816-484a-a804-53a00f8f9b6b" (UID: "c4b14b86-6816-484a-a804-53a00f8f9b6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.393569 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b14b86-6816-484a-a804-53a00f8f9b6b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.420365 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b14b86-6816-484a-a804-53a00f8f9b6b-kube-api-access-trgzd" (OuterVolumeSpecName: "kube-api-access-trgzd") pod "c4b14b86-6816-484a-a804-53a00f8f9b6b" (UID: "c4b14b86-6816-484a-a804-53a00f8f9b6b"). InnerVolumeSpecName "kube-api-access-trgzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.429183 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-config-data" (OuterVolumeSpecName: "config-data") pod "c4b14b86-6816-484a-a804-53a00f8f9b6b" (UID: "c4b14b86-6816-484a-a804-53a00f8f9b6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.432995 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4b14b86-6816-484a-a804-53a00f8f9b6b" (UID: "c4b14b86-6816-484a-a804-53a00f8f9b6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.467210 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c4b14b86-6816-484a-a804-53a00f8f9b6b" (UID: "c4b14b86-6816-484a-a804-53a00f8f9b6b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.471361 4697 generic.go:334] "Generic (PLEG): container finished" podID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerID="253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e" exitCode=143 Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.471431 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443","Type":"ContainerDied","Data":"253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e"} Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.474746 4697 generic.go:334] "Generic (PLEG): container finished" podID="c4b14b86-6816-484a-a804-53a00f8f9b6b" containerID="08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6" exitCode=0 Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.474774 4697 generic.go:334] "Generic (PLEG): container finished" podID="c4b14b86-6816-484a-a804-53a00f8f9b6b" 
containerID="dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f" exitCode=143 Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.474812 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b14b86-6816-484a-a804-53a00f8f9b6b","Type":"ContainerDied","Data":"08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6"} Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.474840 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b14b86-6816-484a-a804-53a00f8f9b6b","Type":"ContainerDied","Data":"dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f"} Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.474850 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b14b86-6816-484a-a804-53a00f8f9b6b","Type":"ContainerDied","Data":"4ec76e073c83b358eb6cad39e03afb90d4642415af3bb33af7c6955761cacfc2"} Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.474865 4697 scope.go:117] "RemoveContainer" containerID="08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.474815 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.495453 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.495481 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.495493 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trgzd\" (UniqueName: \"kubernetes.io/projected/c4b14b86-6816-484a-a804-53a00f8f9b6b-kube-api-access-trgzd\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.495501 4697 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4b14b86-6816-484a-a804-53a00f8f9b6b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.505453 4697 scope.go:117] "RemoveContainer" containerID="dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.520031 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.528489 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.540428 4697 scope.go:117] "RemoveContainer" containerID="08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6" Jan 27 15:32:37 crc kubenswrapper[4697]: E0127 15:32:37.540762 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6\": container with ID starting with 08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6 not found: ID does not exist" containerID="08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.540816 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6"} err="failed to get container status \"08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6\": rpc error: code = NotFound desc = could not find container \"08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6\": container with ID starting with 08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6 not found: ID does not exist" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.540840 4697 scope.go:117] "RemoveContainer" containerID="dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f" Jan 27 15:32:37 crc kubenswrapper[4697]: E0127 15:32:37.541140 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f\": container with ID starting with dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f not found: ID does not exist" containerID="dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.541183 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f"} err="failed to get container status \"dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f\": rpc error: code = NotFound desc = could not find container \"dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f\": container 
with ID starting with dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f not found: ID does not exist" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.541211 4697 scope.go:117] "RemoveContainer" containerID="08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.541525 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6"} err="failed to get container status \"08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6\": rpc error: code = NotFound desc = could not find container \"08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6\": container with ID starting with 08107453ea09bdc90aced10ca36f4ecd43c07a58b2009ff27de40757edab2ee6 not found: ID does not exist" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.541575 4697 scope.go:117] "RemoveContainer" containerID="dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.541859 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f"} err="failed to get container status \"dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f\": rpc error: code = NotFound desc = could not find container \"dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f\": container with ID starting with dfa9a2d317a3d3152a3d108e8c85a177b0cd870c837753f7451b45f231cfd84f not found: ID does not exist" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.554883 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:32:37 crc kubenswrapper[4697]: E0127 15:32:37.555281 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b14b86-6816-484a-a804-53a00f8f9b6b" 
containerName="nova-metadata-metadata" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.555297 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b14b86-6816-484a-a804-53a00f8f9b6b" containerName="nova-metadata-metadata" Jan 27 15:32:37 crc kubenswrapper[4697]: E0127 15:32:37.555328 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" containerName="dnsmasq-dns" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.555335 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" containerName="dnsmasq-dns" Jan 27 15:32:37 crc kubenswrapper[4697]: E0127 15:32:37.555368 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b14b86-6816-484a-a804-53a00f8f9b6b" containerName="nova-metadata-log" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.555375 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b14b86-6816-484a-a804-53a00f8f9b6b" containerName="nova-metadata-log" Jan 27 15:32:37 crc kubenswrapper[4697]: E0127 15:32:37.555384 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5544478-ac7d-47a8-a27f-8da131efb0fd" containerName="nova-manage" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.555391 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5544478-ac7d-47a8-a27f-8da131efb0fd" containerName="nova-manage" Jan 27 15:32:37 crc kubenswrapper[4697]: E0127 15:32:37.555404 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" containerName="init" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.555413 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" containerName="init" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.555578 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e0ecdc-1d14-468a-bc68-d6cfaf89ffa6" containerName="dnsmasq-dns" Jan 27 
15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.555594 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b14b86-6816-484a-a804-53a00f8f9b6b" containerName="nova-metadata-metadata" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.555607 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5544478-ac7d-47a8-a27f-8da131efb0fd" containerName="nova-manage" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.555637 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b14b86-6816-484a-a804-53a00f8f9b6b" containerName="nova-metadata-log" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.556636 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.561127 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.561375 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.570882 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.700214 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.700649 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-config-data\") pod \"nova-metadata-0\" (UID: 
\"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.700836 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p28nn\" (UniqueName: \"kubernetes.io/projected/c0d44e3f-5773-4f1d-98bd-6ee63096a361-kube-api-access-p28nn\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.700956 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.701062 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d44e3f-5773-4f1d-98bd-6ee63096a361-logs\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.802793 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-config-data\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.803020 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p28nn\" (UniqueName: \"kubernetes.io/projected/c0d44e3f-5773-4f1d-98bd-6ee63096a361-kube-api-access-p28nn\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc 
kubenswrapper[4697]: I0127 15:32:37.803116 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.803216 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d44e3f-5773-4f1d-98bd-6ee63096a361-logs\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.803310 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.803689 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d44e3f-5773-4f1d-98bd-6ee63096a361-logs\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.806673 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.808309 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.810307 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-config-data\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.832161 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p28nn\" (UniqueName: \"kubernetes.io/projected/c0d44e3f-5773-4f1d-98bd-6ee63096a361-kube-api-access-p28nn\") pod \"nova-metadata-0\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " pod="openstack/nova-metadata-0" Jan 27 15:32:37 crc kubenswrapper[4697]: I0127 15:32:37.927566 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.226938 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.312177 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msgb5\" (UniqueName: \"kubernetes.io/projected/00db5b9d-f113-4bc3-b628-ba42b53388b1-kube-api-access-msgb5\") pod \"00db5b9d-f113-4bc3-b628-ba42b53388b1\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.312338 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-combined-ca-bundle\") pod \"00db5b9d-f113-4bc3-b628-ba42b53388b1\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.312373 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-config-data\") pod \"00db5b9d-f113-4bc3-b628-ba42b53388b1\" (UID: \"00db5b9d-f113-4bc3-b628-ba42b53388b1\") " Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.317005 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00db5b9d-f113-4bc3-b628-ba42b53388b1-kube-api-access-msgb5" (OuterVolumeSpecName: "kube-api-access-msgb5") pod "00db5b9d-f113-4bc3-b628-ba42b53388b1" (UID: "00db5b9d-f113-4bc3-b628-ba42b53388b1"). InnerVolumeSpecName "kube-api-access-msgb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.338561 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00db5b9d-f113-4bc3-b628-ba42b53388b1" (UID: "00db5b9d-f113-4bc3-b628-ba42b53388b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.344616 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-config-data" (OuterVolumeSpecName: "config-data") pod "00db5b9d-f113-4bc3-b628-ba42b53388b1" (UID: "00db5b9d-f113-4bc3-b628-ba42b53388b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.414143 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msgb5\" (UniqueName: \"kubernetes.io/projected/00db5b9d-f113-4bc3-b628-ba42b53388b1-kube-api-access-msgb5\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.414178 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.414186 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00db5b9d-f113-4bc3-b628-ba42b53388b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.486105 4697 generic.go:334] "Generic (PLEG): container finished" podID="00db5b9d-f113-4bc3-b628-ba42b53388b1" containerID="536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5" exitCode=0 Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.486156 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00db5b9d-f113-4bc3-b628-ba42b53388b1","Type":"ContainerDied","Data":"536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5"} Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.486201 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.486220 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00db5b9d-f113-4bc3-b628-ba42b53388b1","Type":"ContainerDied","Data":"f2bd466fa07ff768425f206770a41ca72c749a6800cd4f554541d67dcb0395c7"} Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.486251 4697 scope.go:117] "RemoveContainer" containerID="536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.526933 4697 scope.go:117] "RemoveContainer" containerID="536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5" Jan 27 15:32:38 crc kubenswrapper[4697]: E0127 15:32:38.527367 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5\": container with ID starting with 536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5 not found: ID does not exist" containerID="536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.527398 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5"} err="failed to get container status \"536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5\": rpc error: code = NotFound desc = could not find container \"536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5\": container with ID starting with 536cb9785ab4bbdece4e248afa0eb4936af6c8913295be9b405da9b7885aa2d5 not found: ID does not exist" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.530235 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.541877 4697 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.552310 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:32:38 crc kubenswrapper[4697]: E0127 15:32:38.552725 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00db5b9d-f113-4bc3-b628-ba42b53388b1" containerName="nova-scheduler-scheduler" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.552742 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="00db5b9d-f113-4bc3-b628-ba42b53388b1" containerName="nova-scheduler-scheduler" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.552961 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="00db5b9d-f113-4bc3-b628-ba42b53388b1" containerName="nova-scheduler-scheduler" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.553571 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.557099 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.581764 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00db5b9d-f113-4bc3-b628-ba42b53388b1" path="/var/lib/kubelet/pods/00db5b9d-f113-4bc3-b628-ba42b53388b1/volumes" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.582321 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b14b86-6816-484a-a804-53a00f8f9b6b" path="/var/lib/kubelet/pods/c4b14b86-6816-484a-a804-53a00f8f9b6b/volumes" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.582884 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.609667 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-metadata-0"] Jan 27 15:32:38 crc kubenswrapper[4697]: W0127 15:32:38.611459 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0d44e3f_5773_4f1d_98bd_6ee63096a361.slice/crio-9246ca622c5df714c4b34a53adf5e148dbe9b77906eb11d7f650ae403a8883e0 WatchSource:0}: Error finding container 9246ca622c5df714c4b34a53adf5e148dbe9b77906eb11d7f650ae403a8883e0: Status 404 returned error can't find the container with id 9246ca622c5df714c4b34a53adf5e148dbe9b77906eb11d7f650ae403a8883e0 Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.722356 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-config-data\") pod \"nova-scheduler-0\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.726663 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.726814 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mghhh\" (UniqueName: \"kubernetes.io/projected/501c9575-b5b8-4c4a-a225-f1853eec3ea8-kube-api-access-mghhh\") pod \"nova-scheduler-0\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.828540 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mghhh\" (UniqueName: 
\"kubernetes.io/projected/501c9575-b5b8-4c4a-a225-f1853eec3ea8-kube-api-access-mghhh\") pod \"nova-scheduler-0\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.828716 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-config-data\") pod \"nova-scheduler-0\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.828808 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.832557 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-config-data\") pod \"nova-scheduler-0\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.834530 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.846099 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mghhh\" (UniqueName: \"kubernetes.io/projected/501c9575-b5b8-4c4a-a225-f1853eec3ea8-kube-api-access-mghhh\") pod \"nova-scheduler-0\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " 
pod="openstack/nova-scheduler-0" Jan 27 15:32:38 crc kubenswrapper[4697]: I0127 15:32:38.873646 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:32:39 crc kubenswrapper[4697]: I0127 15:32:39.362766 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:32:39 crc kubenswrapper[4697]: W0127 15:32:39.363349 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod501c9575_b5b8_4c4a_a225_f1853eec3ea8.slice/crio-f88979a433a5e2a0fc3fce1d84be78e51fd5dd6eda5e7c9667699c23b61c1d72 WatchSource:0}: Error finding container f88979a433a5e2a0fc3fce1d84be78e51fd5dd6eda5e7c9667699c23b61c1d72: Status 404 returned error can't find the container with id f88979a433a5e2a0fc3fce1d84be78e51fd5dd6eda5e7c9667699c23b61c1d72 Jan 27 15:32:39 crc kubenswrapper[4697]: I0127 15:32:39.501115 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c0d44e3f-5773-4f1d-98bd-6ee63096a361","Type":"ContainerStarted","Data":"ed8fc3f91330cf9c756171832c32313774b0325c7eb210f00e86b4a60338e6a5"} Jan 27 15:32:39 crc kubenswrapper[4697]: I0127 15:32:39.501159 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c0d44e3f-5773-4f1d-98bd-6ee63096a361","Type":"ContainerStarted","Data":"b62e58371635562f614aa3b510293c65bf2615dee1aaeb18d11aa77f37d36b53"} Jan 27 15:32:39 crc kubenswrapper[4697]: I0127 15:32:39.501168 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c0d44e3f-5773-4f1d-98bd-6ee63096a361","Type":"ContainerStarted","Data":"9246ca622c5df714c4b34a53adf5e148dbe9b77906eb11d7f650ae403a8883e0"} Jan 27 15:32:39 crc kubenswrapper[4697]: I0127 15:32:39.503815 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"501c9575-b5b8-4c4a-a225-f1853eec3ea8","Type":"ContainerStarted","Data":"f88979a433a5e2a0fc3fce1d84be78e51fd5dd6eda5e7c9667699c23b61c1d72"} Jan 27 15:32:39 crc kubenswrapper[4697]: I0127 15:32:39.531071 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.531051955 podStartE2EDuration="2.531051955s" podCreationTimestamp="2026-01-27 15:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:39.521762119 +0000 UTC m=+1455.694161920" watchObservedRunningTime="2026-01-27 15:32:39.531051955 +0000 UTC m=+1455.703451736" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.326394 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.468198 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-logs\") pod \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.468292 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmk4n\" (UniqueName: \"kubernetes.io/projected/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-kube-api-access-jmk4n\") pod \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.468348 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-combined-ca-bundle\") pod \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 
15:32:40.468387 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-config-data\") pod \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\" (UID: \"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443\") " Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.468624 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-logs" (OuterVolumeSpecName: "logs") pod "87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" (UID: "87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.468919 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.493588 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-kube-api-access-jmk4n" (OuterVolumeSpecName: "kube-api-access-jmk4n") pod "87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" (UID: "87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443"). InnerVolumeSpecName "kube-api-access-jmk4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.541055 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-config-data" (OuterVolumeSpecName: "config-data") pod "87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" (UID: "87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.541446 4697 generic.go:334] "Generic (PLEG): container finished" podID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerID="21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781" exitCode=0 Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.541650 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.542201 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443","Type":"ContainerDied","Data":"21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781"} Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.542268 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443","Type":"ContainerDied","Data":"62d3ea6b449e68d02e0335486cf7671effc89f5be223e37f88b7dba47631a354"} Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.542294 4697 scope.go:117] "RemoveContainer" containerID="21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.546073 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" (UID: "87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.548294 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"501c9575-b5b8-4c4a-a225-f1853eec3ea8","Type":"ContainerStarted","Data":"7646f676d62a2b950c52eadb999f0729d3653a85b166248081dd1d209dbf2644"} Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.572955 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmk4n\" (UniqueName: \"kubernetes.io/projected/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-kube-api-access-jmk4n\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.572988 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.572997 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.604874 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.604851378 podStartE2EDuration="2.604851378s" podCreationTimestamp="2026-01-27 15:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:40.591322999 +0000 UTC m=+1456.763722780" watchObservedRunningTime="2026-01-27 15:32:40.604851378 +0000 UTC m=+1456.777251159" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.621557 4697 scope.go:117] "RemoveContainer" containerID="253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.654318 4697 scope.go:117] 
"RemoveContainer" containerID="21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781" Jan 27 15:32:40 crc kubenswrapper[4697]: E0127 15:32:40.655107 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781\": container with ID starting with 21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781 not found: ID does not exist" containerID="21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.655216 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781"} err="failed to get container status \"21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781\": rpc error: code = NotFound desc = could not find container \"21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781\": container with ID starting with 21672ab3ed72933291e5ce037922a289dcc964fd891a508cce64f62db0547781 not found: ID does not exist" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.655353 4697 scope.go:117] "RemoveContainer" containerID="253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e" Jan 27 15:32:40 crc kubenswrapper[4697]: E0127 15:32:40.655838 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e\": container with ID starting with 253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e not found: ID does not exist" containerID="253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.655933 4697 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e"} err="failed to get container status \"253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e\": rpc error: code = NotFound desc = could not find container \"253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e\": container with ID starting with 253fb0d2b6adbd491ce9a17bc37b4cdfd16e34e98e7bd8a455d7bcc94372583e not found: ID does not exist" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.904841 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.912965 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.938947 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 15:32:40 crc kubenswrapper[4697]: E0127 15:32:40.941027 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-api" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.941665 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-api" Jan 27 15:32:40 crc kubenswrapper[4697]: E0127 15:32:40.941804 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-log" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.941887 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-log" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.942214 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-log" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.942340 4697 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" containerName="nova-api-api" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.944595 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.947347 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.959110 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:32:40 crc kubenswrapper[4697]: I0127 15:32:40.961046 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" probeResult="failure" output=< Jan 27 15:32:40 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:32:40 crc kubenswrapper[4697]: > Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.081171 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv5wm\" (UniqueName: \"kubernetes.io/projected/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-kube-api-access-gv5wm\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.081235 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-logs\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.081296 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.081484 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-config-data\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.183026 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv5wm\" (UniqueName: \"kubernetes.io/projected/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-kube-api-access-gv5wm\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.183100 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-logs\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.183151 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.183295 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-config-data\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.184565 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-logs\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.189236 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-config-data\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.193510 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.204610 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv5wm\" (UniqueName: \"kubernetes.io/projected/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-kube-api-access-gv5wm\") pod \"nova-api-0\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.271844 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:32:41 crc kubenswrapper[4697]: I0127 15:32:41.761001 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:32:41 crc kubenswrapper[4697]: W0127 15:32:41.766077 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda03e6dcb_fc88_458b_9f8d_36aa045a14ca.slice/crio-bc602f4376c6f50fab82de25aeb1761167909ee3981d44125cf0b923fdf160fc WatchSource:0}: Error finding container bc602f4376c6f50fab82de25aeb1761167909ee3981d44125cf0b923fdf160fc: Status 404 returned error can't find the container with id bc602f4376c6f50fab82de25aeb1761167909ee3981d44125cf0b923fdf160fc Jan 27 15:32:42 crc kubenswrapper[4697]: I0127 15:32:42.582737 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443" path="/var/lib/kubelet/pods/87e9d4dd-cc6a-4809-b4bb-51bc9f3b5443/volumes" Jan 27 15:32:42 crc kubenswrapper[4697]: I0127 15:32:42.583766 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a03e6dcb-fc88-458b-9f8d-36aa045a14ca","Type":"ContainerStarted","Data":"194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80"} Jan 27 15:32:42 crc kubenswrapper[4697]: I0127 15:32:42.583855 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a03e6dcb-fc88-458b-9f8d-36aa045a14ca","Type":"ContainerStarted","Data":"fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903"} Jan 27 15:32:42 crc kubenswrapper[4697]: I0127 15:32:42.583870 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a03e6dcb-fc88-458b-9f8d-36aa045a14ca","Type":"ContainerStarted","Data":"bc602f4376c6f50fab82de25aeb1761167909ee3981d44125cf0b923fdf160fc"} Jan 27 15:32:42 crc kubenswrapper[4697]: I0127 15:32:42.592829 4697 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5928035720000002 podStartE2EDuration="2.592803572s" podCreationTimestamp="2026-01-27 15:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:42.588825825 +0000 UTC m=+1458.761225616" watchObservedRunningTime="2026-01-27 15:32:42.592803572 +0000 UTC m=+1458.765203353" Jan 27 15:32:42 crc kubenswrapper[4697]: I0127 15:32:42.928443 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 15:32:42 crc kubenswrapper[4697]: I0127 15:32:42.930013 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 15:32:43 crc kubenswrapper[4697]: I0127 15:32:43.581470 4697 generic.go:334] "Generic (PLEG): container finished" podID="dab4d694-1486-4154-89dd-5f2f04639abe" containerID="1ff89c29f96f550b0c007c37fa86cbd840a71abf4f2d281576e8d3720d9458cf" exitCode=0 Jan 27 15:32:43 crc kubenswrapper[4697]: I0127 15:32:43.581544 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8qn4m" event={"ID":"dab4d694-1486-4154-89dd-5f2f04639abe","Type":"ContainerDied","Data":"1ff89c29f96f550b0c007c37fa86cbd840a71abf4f2d281576e8d3720d9458cf"} Jan 27 15:32:43 crc kubenswrapper[4697]: I0127 15:32:43.874642 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 15:32:44 crc kubenswrapper[4697]: I0127 15:32:44.957588 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8qn4m" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.059459 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-config-data\") pod \"dab4d694-1486-4154-89dd-5f2f04639abe\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.059628 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-combined-ca-bundle\") pod \"dab4d694-1486-4154-89dd-5f2f04639abe\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.059753 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-scripts\") pod \"dab4d694-1486-4154-89dd-5f2f04639abe\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.059793 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27hl5\" (UniqueName: \"kubernetes.io/projected/dab4d694-1486-4154-89dd-5f2f04639abe-kube-api-access-27hl5\") pod \"dab4d694-1486-4154-89dd-5f2f04639abe\" (UID: \"dab4d694-1486-4154-89dd-5f2f04639abe\") " Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.065404 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-scripts" (OuterVolumeSpecName: "scripts") pod "dab4d694-1486-4154-89dd-5f2f04639abe" (UID: "dab4d694-1486-4154-89dd-5f2f04639abe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.069933 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab4d694-1486-4154-89dd-5f2f04639abe-kube-api-access-27hl5" (OuterVolumeSpecName: "kube-api-access-27hl5") pod "dab4d694-1486-4154-89dd-5f2f04639abe" (UID: "dab4d694-1486-4154-89dd-5f2f04639abe"). InnerVolumeSpecName "kube-api-access-27hl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.086688 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-config-data" (OuterVolumeSpecName: "config-data") pod "dab4d694-1486-4154-89dd-5f2f04639abe" (UID: "dab4d694-1486-4154-89dd-5f2f04639abe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.089300 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dab4d694-1486-4154-89dd-5f2f04639abe" (UID: "dab4d694-1486-4154-89dd-5f2f04639abe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.162473 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27hl5\" (UniqueName: \"kubernetes.io/projected/dab4d694-1486-4154-89dd-5f2f04639abe-kube-api-access-27hl5\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.162558 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.162580 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.162601 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab4d694-1486-4154-89dd-5f2f04639abe-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.602523 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8qn4m" event={"ID":"dab4d694-1486-4154-89dd-5f2f04639abe","Type":"ContainerDied","Data":"8a98e38db222906f254668540946323990857febb99da5ba8e338231894f78f9"} Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.602558 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a98e38db222906f254668540946323990857febb99da5ba8e338231894f78f9" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.602659 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8qn4m" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.694150 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 15:32:45 crc kubenswrapper[4697]: E0127 15:32:45.694522 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab4d694-1486-4154-89dd-5f2f04639abe" containerName="nova-cell1-conductor-db-sync" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.694534 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab4d694-1486-4154-89dd-5f2f04639abe" containerName="nova-cell1-conductor-db-sync" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.694730 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab4d694-1486-4154-89dd-5f2f04639abe" containerName="nova-cell1-conductor-db-sync" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.695321 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.697188 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.711738 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.774674 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rww9r\" (UniqueName: \"kubernetes.io/projected/41233cf8-f273-4cae-a02d-9e0fb56b2f1d-kube-api-access-rww9r\") pod \"nova-cell1-conductor-0\" (UID: \"41233cf8-f273-4cae-a02d-9e0fb56b2f1d\") " pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.775041 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/41233cf8-f273-4cae-a02d-9e0fb56b2f1d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"41233cf8-f273-4cae-a02d-9e0fb56b2f1d\") " pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.775103 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41233cf8-f273-4cae-a02d-9e0fb56b2f1d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"41233cf8-f273-4cae-a02d-9e0fb56b2f1d\") " pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.877114 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rww9r\" (UniqueName: \"kubernetes.io/projected/41233cf8-f273-4cae-a02d-9e0fb56b2f1d-kube-api-access-rww9r\") pod \"nova-cell1-conductor-0\" (UID: \"41233cf8-f273-4cae-a02d-9e0fb56b2f1d\") " pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.877209 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41233cf8-f273-4cae-a02d-9e0fb56b2f1d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"41233cf8-f273-4cae-a02d-9e0fb56b2f1d\") " pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.877297 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41233cf8-f273-4cae-a02d-9e0fb56b2f1d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"41233cf8-f273-4cae-a02d-9e0fb56b2f1d\") " pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.884725 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41233cf8-f273-4cae-a02d-9e0fb56b2f1d-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"41233cf8-f273-4cae-a02d-9e0fb56b2f1d\") " pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.894624 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41233cf8-f273-4cae-a02d-9e0fb56b2f1d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"41233cf8-f273-4cae-a02d-9e0fb56b2f1d\") " pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:45 crc kubenswrapper[4697]: I0127 15:32:45.941534 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rww9r\" (UniqueName: \"kubernetes.io/projected/41233cf8-f273-4cae-a02d-9e0fb56b2f1d-kube-api-access-rww9r\") pod \"nova-cell1-conductor-0\" (UID: \"41233cf8-f273-4cae-a02d-9e0fb56b2f1d\") " pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:46 crc kubenswrapper[4697]: I0127 15:32:46.052907 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:46 crc kubenswrapper[4697]: I0127 15:32:46.614502 4697 generic.go:334] "Generic (PLEG): container finished" podID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerID="2df99ec05f3e11b983a2be572f2382aaa1f7adad62f6fff7f6f6a32e40fc7f05" exitCode=137 Jan 27 15:32:46 crc kubenswrapper[4697]: I0127 15:32:46.614816 4697 generic.go:334] "Generic (PLEG): container finished" podID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerID="d1e760cbe02185bc38a0ab3d68834dd5be89159d85d23e6c2893a23d0cd8eff0" exitCode=137 Jan 27 15:32:46 crc kubenswrapper[4697]: I0127 15:32:46.614734 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerDied","Data":"2df99ec05f3e11b983a2be572f2382aaa1f7adad62f6fff7f6f6a32e40fc7f05"} Jan 27 15:32:46 crc kubenswrapper[4697]: I0127 15:32:46.614857 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerDied","Data":"d1e760cbe02185bc38a0ab3d68834dd5be89159d85d23e6c2893a23d0cd8eff0"} Jan 27 15:32:46 crc kubenswrapper[4697]: I0127 15:32:46.614879 4697 scope.go:117] "RemoveContainer" containerID="5482b300f2fd66a56a6e5b2b822ea47551b3a8a006085c8bcf8f05ac76f7cc29" Jan 27 15:32:46 crc kubenswrapper[4697]: I0127 15:32:46.615561 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 15:32:46 crc kubenswrapper[4697]: W0127 15:32:46.627614 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41233cf8_f273_4cae_a02d_9e0fb56b2f1d.slice/crio-71a7a261d2b1073df956bb2b568f409a7dd495d43b18e2fe7860d16897576f17 WatchSource:0}: Error finding container 71a7a261d2b1073df956bb2b568f409a7dd495d43b18e2fe7860d16897576f17: Status 404 returned error can't find the container with id 71a7a261d2b1073df956bb2b568f409a7dd495d43b18e2fe7860d16897576f17 Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.254779 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.306194 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-combined-ca-bundle\") pod \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.306505 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt6v5\" (UniqueName: \"kubernetes.io/projected/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-kube-api-access-mt6v5\") pod \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.307132 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-config-data\") pod \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.307394 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-secret-key\") pod \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.307550 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-scripts\") pod \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.307817 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-tls-certs\") pod \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.307946 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-logs\") pod \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\" (UID: \"d6ad161d-fe95-4ad3-8f60-1f1310b2974c\") " Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.375594 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-config-data" (OuterVolumeSpecName: "config-data") pod "d6ad161d-fe95-4ad3-8f60-1f1310b2974c" (UID: "d6ad161d-fe95-4ad3-8f60-1f1310b2974c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.383494 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-scripts" (OuterVolumeSpecName: "scripts") pod "d6ad161d-fe95-4ad3-8f60-1f1310b2974c" (UID: "d6ad161d-fe95-4ad3-8f60-1f1310b2974c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.404105 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-logs" (OuterVolumeSpecName: "logs") pod "d6ad161d-fe95-4ad3-8f60-1f1310b2974c" (UID: "d6ad161d-fe95-4ad3-8f60-1f1310b2974c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.410200 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.410236 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.410247 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.413397 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-kube-api-access-mt6v5" (OuterVolumeSpecName: "kube-api-access-mt6v5") pod "d6ad161d-fe95-4ad3-8f60-1f1310b2974c" (UID: "d6ad161d-fe95-4ad3-8f60-1f1310b2974c"). InnerVolumeSpecName "kube-api-access-mt6v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.414801 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "d6ad161d-fe95-4ad3-8f60-1f1310b2974c" (UID: "d6ad161d-fe95-4ad3-8f60-1f1310b2974c"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.414884 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6ad161d-fe95-4ad3-8f60-1f1310b2974c" (UID: "d6ad161d-fe95-4ad3-8f60-1f1310b2974c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.427028 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d6ad161d-fe95-4ad3-8f60-1f1310b2974c" (UID: "d6ad161d-fe95-4ad3-8f60-1f1310b2974c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.512069 4697 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.512099 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.512108 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt6v5\" (UniqueName: \"kubernetes.io/projected/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-kube-api-access-mt6v5\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.512119 4697 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d6ad161d-fe95-4ad3-8f60-1f1310b2974c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.634105 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"41233cf8-f273-4cae-a02d-9e0fb56b2f1d","Type":"ContainerStarted","Data":"f9387b0050b27dab4c2ace422d5b27e802fe5a59ba36f4b0731e30189384d719"} Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.634177 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"41233cf8-f273-4cae-a02d-9e0fb56b2f1d","Type":"ContainerStarted","Data":"71a7a261d2b1073df956bb2b568f409a7dd495d43b18e2fe7860d16897576f17"} Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.634364 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.636736 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5965fc65fb-dvhzz" event={"ID":"d6ad161d-fe95-4ad3-8f60-1f1310b2974c","Type":"ContainerDied","Data":"df9f412f4a46fd18e30b83b99e2150f3a7fcbe89d808c1d409cff5a479e9d5e1"} Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.636826 4697 scope.go:117] "RemoveContainer" containerID="2df99ec05f3e11b983a2be572f2382aaa1f7adad62f6fff7f6f6a32e40fc7f05" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.636973 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5965fc65fb-dvhzz" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.676104 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.6760578820000003 podStartE2EDuration="2.676057882s" podCreationTimestamp="2026-01-27 15:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:47.657669744 +0000 UTC m=+1463.830069555" watchObservedRunningTime="2026-01-27 15:32:47.676057882 +0000 UTC m=+1463.848457673" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.704928 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5965fc65fb-dvhzz"] Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.713763 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5965fc65fb-dvhzz"] Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.913212 4697 scope.go:117] "RemoveContainer" containerID="d1e760cbe02185bc38a0ab3d68834dd5be89159d85d23e6c2893a23d0cd8eff0" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.928155 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 15:32:47 crc kubenswrapper[4697]: I0127 15:32:47.929121 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 15:32:48 crc kubenswrapper[4697]: I0127 15:32:48.585480 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" path="/var/lib/kubelet/pods/d6ad161d-fe95-4ad3-8f60-1f1310b2974c/volumes" Jan 27 15:32:48 crc kubenswrapper[4697]: I0127 15:32:48.874339 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 15:32:48 crc kubenswrapper[4697]: I0127 15:32:48.908304 4697 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 15:32:48 crc kubenswrapper[4697]: I0127 15:32:48.940947 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:32:48 crc kubenswrapper[4697]: I0127 15:32:48.940998 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:32:49 crc kubenswrapper[4697]: I0127 15:32:49.687438 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 15:32:49 crc kubenswrapper[4697]: I0127 15:32:49.987377 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 15:32:50 crc kubenswrapper[4697]: I0127 15:32:50.954015 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" probeResult="failure" output=< Jan 27 15:32:50 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:32:50 crc kubenswrapper[4697]: > Jan 27 15:32:51 crc kubenswrapper[4697]: I0127 15:32:51.272159 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 15:32:51 crc kubenswrapper[4697]: I0127 15:32:51.272590 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 15:32:52 crc kubenswrapper[4697]: I0127 15:32:52.355091 4697 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:32:52 crc kubenswrapper[4697]: I0127 15:32:52.355122 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:32:55 crc kubenswrapper[4697]: I0127 15:32:55.108480 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:32:55 crc kubenswrapper[4697]: I0127 15:32:55.108814 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:32:56 crc kubenswrapper[4697]: I0127 15:32:56.088622 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 15:32:56 crc kubenswrapper[4697]: I0127 15:32:56.723757 4697 generic.go:334] "Generic (PLEG): container finished" podID="3c43bdb1-1b62-4407-87e6-14184c7d7ea2" containerID="867d4560f7712d03fc5b944983a3ffa47a0c21fe5bbbfa7cf347a295f8b2a68f" exitCode=137 Jan 27 15:32:56 crc kubenswrapper[4697]: I0127 15:32:56.723809 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"3c43bdb1-1b62-4407-87e6-14184c7d7ea2","Type":"ContainerDied","Data":"867d4560f7712d03fc5b944983a3ffa47a0c21fe5bbbfa7cf347a295f8b2a68f"} Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.211363 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.313863 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-combined-ca-bundle\") pod \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.314186 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx9tb\" (UniqueName: \"kubernetes.io/projected/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-kube-api-access-sx9tb\") pod \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.314395 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-config-data\") pod \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\" (UID: \"3c43bdb1-1b62-4407-87e6-14184c7d7ea2\") " Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.330601 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-kube-api-access-sx9tb" (OuterVolumeSpecName: "kube-api-access-sx9tb") pod "3c43bdb1-1b62-4407-87e6-14184c7d7ea2" (UID: "3c43bdb1-1b62-4407-87e6-14184c7d7ea2"). InnerVolumeSpecName "kube-api-access-sx9tb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.347982 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-config-data" (OuterVolumeSpecName: "config-data") pod "3c43bdb1-1b62-4407-87e6-14184c7d7ea2" (UID: "3c43bdb1-1b62-4407-87e6-14184c7d7ea2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.349407 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c43bdb1-1b62-4407-87e6-14184c7d7ea2" (UID: "3c43bdb1-1b62-4407-87e6-14184c7d7ea2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.416588 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.416633 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx9tb\" (UniqueName: \"kubernetes.io/projected/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-kube-api-access-sx9tb\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.416645 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c43bdb1-1b62-4407-87e6-14184c7d7ea2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.733351 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"3c43bdb1-1b62-4407-87e6-14184c7d7ea2","Type":"ContainerDied","Data":"0350b40a8da5672879f5de1928e1d81333c0cbc0b3dccaa3a738d9d6ef8b75c7"} Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.733390 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.733404 4697 scope.go:117] "RemoveContainer" containerID="867d4560f7712d03fc5b944983a3ffa47a0c21fe5bbbfa7cf347a295f8b2a68f" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.771535 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.782347 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.796396 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 15:32:57 crc kubenswrapper[4697]: E0127 15:32:57.796758 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.796772 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: E0127 15:32:57.796802 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.796810 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: E0127 15:32:57.796823 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 
15:32:57.796831 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: E0127 15:32:57.796849 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c43bdb1-1b62-4407-87e6-14184c7d7ea2" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.796857 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c43bdb1-1b62-4407-87e6-14184c7d7ea2" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 15:32:57 crc kubenswrapper[4697]: E0127 15:32:57.796874 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon-log" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.796879 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon-log" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.797062 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c43bdb1-1b62-4407-87e6-14184c7d7ea2" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.797078 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.797086 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.797095 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.797105 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:32:57 crc kubenswrapper[4697]: 
I0127 15:32:57.797121 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon-log" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.797853 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.802349 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.802576 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.802760 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.819705 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.924574 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.925297 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flpws\" (UniqueName: \"kubernetes.io/projected/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-kube-api-access-flpws\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.925441 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.925534 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.925654 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.932246 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.932698 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.937553 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 15:32:57 crc kubenswrapper[4697]: I0127 15:32:57.939452 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.030363 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.030499 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flpws\" (UniqueName: \"kubernetes.io/projected/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-kube-api-access-flpws\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.030674 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.030754 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.030922 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.036421 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.038764 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.040673 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.047459 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.062617 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flpws\" (UniqueName: \"kubernetes.io/projected/6ac3c287-657e-4e2a-be91-50e9fbce6ea0-kube-api-access-flpws\") pod \"nova-cell1-novncproxy-0\" (UID: \"6ac3c287-657e-4e2a-be91-50e9fbce6ea0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.124569 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.578087 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c43bdb1-1b62-4407-87e6-14184c7d7ea2" path="/var/lib/kubelet/pods/3c43bdb1-1b62-4407-87e6-14184c7d7ea2/volumes" Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.647226 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 15:32:58 crc kubenswrapper[4697]: I0127 15:32:58.749399 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6ac3c287-657e-4e2a-be91-50e9fbce6ea0","Type":"ContainerStarted","Data":"0ef45bbf15cd89196a435b757d246fd1c3852c51d964816360854583679e7ad7"} Jan 27 15:32:59 crc kubenswrapper[4697]: I0127 15:32:59.761195 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6ac3c287-657e-4e2a-be91-50e9fbce6ea0","Type":"ContainerStarted","Data":"39f20a1654164c5bf6b6dd2dc3fc0ca6f3980d5d2e86fd533156aeccc97eea86"} Jan 27 15:33:00 crc kubenswrapper[4697]: I0127 15:33:00.956946 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" probeResult="failure" output=< Jan 27 15:33:00 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:33:00 crc kubenswrapper[4697]: > Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.332310 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.332869 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.344600 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" 
Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.348861 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.361361 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.361339875 podStartE2EDuration="4.361339875s" podCreationTimestamp="2026-01-27 15:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:32:59.785181937 +0000 UTC m=+1475.957581718" watchObservedRunningTime="2026-01-27 15:33:01.361339875 +0000 UTC m=+1477.533739656" Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.783316 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.789487 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.993007 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5nhp7"] Jan 27 15:33:01 crc kubenswrapper[4697]: E0127 15:33:01.996093 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.996301 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ad161d-fe95-4ad3-8f60-1f1310b2974c" containerName="horizon" Jan 27 15:33:01 crc kubenswrapper[4697]: I0127 15:33:01.997742 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.022609 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5nhp7"] Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.118426 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xht\" (UniqueName: \"kubernetes.io/projected/ad285a43-a79d-4383-acc4-208659eeffe1-kube-api-access-64xht\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.118742 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.118852 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.118882 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-config\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.118998 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.119076 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.220945 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.221041 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-config\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.221079 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.221121 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.221217 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xht\" (UniqueName: \"kubernetes.io/projected/ad285a43-a79d-4383-acc4-208659eeffe1-kube-api-access-64xht\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.221248 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.222048 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.222097 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.222059 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.222695 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.222777 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-config\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.244272 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xht\" (UniqueName: \"kubernetes.io/projected/ad285a43-a79d-4383-acc4-208659eeffe1-kube-api-access-64xht\") pod \"dnsmasq-dns-89c5cd4d5-5nhp7\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:02 crc kubenswrapper[4697]: I0127 15:33:02.328928 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:03 crc kubenswrapper[4697]: I0127 15:33:03.078649 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5nhp7"] Jan 27 15:33:03 crc kubenswrapper[4697]: I0127 15:33:03.125019 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:33:03 crc kubenswrapper[4697]: I0127 15:33:03.807699 4697 generic.go:334] "Generic (PLEG): container finished" podID="ad285a43-a79d-4383-acc4-208659eeffe1" containerID="e3766533ac8ecc34aafefc8a5e93e5930e56d4e278a4d15a8f7553c108db9431" exitCode=0 Jan 27 15:33:03 crc kubenswrapper[4697]: I0127 15:33:03.808020 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" event={"ID":"ad285a43-a79d-4383-acc4-208659eeffe1","Type":"ContainerDied","Data":"e3766533ac8ecc34aafefc8a5e93e5930e56d4e278a4d15a8f7553c108db9431"} Jan 27 15:33:03 crc kubenswrapper[4697]: I0127 15:33:03.808237 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" event={"ID":"ad285a43-a79d-4383-acc4-208659eeffe1","Type":"ContainerStarted","Data":"77c9d1b6d77316d3742d6f4ea316ba2e11b382dcf646f208db8aa4eb3407a2ae"} Jan 27 15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.686505 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.818879 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" event={"ID":"ad285a43-a79d-4383-acc4-208659eeffe1","Type":"ContainerStarted","Data":"14a501270d6427fe8a09df1aca2ed85a57b25173f8511970b1f7b82a8efd75d3"} Jan 27 15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.819057 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-log" 
containerID="cri-o://fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903" gracePeriod=30 Jan 27 15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.819149 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-api" containerID="cri-o://194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80" gracePeriod=30 Jan 27 15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.820242 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.820493 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="ceilometer-central-agent" containerID="cri-o://9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923" gracePeriod=30 Jan 27 15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.820559 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="proxy-httpd" containerID="cri-o://0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992" gracePeriod=30 Jan 27 15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.820597 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="sg-core" containerID="cri-o://92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc" gracePeriod=30 Jan 27 15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.820635 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="ceilometer-notification-agent" containerID="cri-o://47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6" gracePeriod=30 Jan 27 
15:33:04 crc kubenswrapper[4697]: I0127 15:33:04.863709 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" podStartSLOduration=3.86369372 podStartE2EDuration="3.86369372s" podCreationTimestamp="2026-01-27 15:33:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:33:04.860637505 +0000 UTC m=+1481.033037306" watchObservedRunningTime="2026-01-27 15:33:04.86369372 +0000 UTC m=+1481.036093501" Jan 27 15:33:05 crc kubenswrapper[4697]: I0127 15:33:05.832178 4697 generic.go:334] "Generic (PLEG): container finished" podID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerID="0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992" exitCode=0 Jan 27 15:33:05 crc kubenswrapper[4697]: I0127 15:33:05.832219 4697 generic.go:334] "Generic (PLEG): container finished" podID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerID="92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc" exitCode=2 Jan 27 15:33:05 crc kubenswrapper[4697]: I0127 15:33:05.832231 4697 generic.go:334] "Generic (PLEG): container finished" podID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerID="9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923" exitCode=0 Jan 27 15:33:05 crc kubenswrapper[4697]: I0127 15:33:05.832254 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerDied","Data":"0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992"} Jan 27 15:33:05 crc kubenswrapper[4697]: I0127 15:33:05.832331 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerDied","Data":"92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc"} Jan 27 15:33:05 crc kubenswrapper[4697]: I0127 15:33:05.832349 4697 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerDied","Data":"9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923"} Jan 27 15:33:05 crc kubenswrapper[4697]: I0127 15:33:05.834970 4697 generic.go:334] "Generic (PLEG): container finished" podID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerID="fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903" exitCode=143 Jan 27 15:33:05 crc kubenswrapper[4697]: I0127 15:33:05.835858 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a03e6dcb-fc88-458b-9f8d-36aa045a14ca","Type":"ContainerDied","Data":"fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903"} Jan 27 15:33:05 crc kubenswrapper[4697]: I0127 15:33:05.835952 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.711574 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.773438 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-scripts\") pod \"03c328cb-43ac-46ab-8677-80be0dde18e3\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.773628 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-combined-ca-bundle\") pod \"03c328cb-43ac-46ab-8677-80be0dde18e3\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.773656 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-ceilometer-tls-certs\") pod \"03c328cb-43ac-46ab-8677-80be0dde18e3\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.773733 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-sg-core-conf-yaml\") pod \"03c328cb-43ac-46ab-8677-80be0dde18e3\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.773762 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-log-httpd\") pod \"03c328cb-43ac-46ab-8677-80be0dde18e3\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.773820 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mxqd\" (UniqueName: 
\"kubernetes.io/projected/03c328cb-43ac-46ab-8677-80be0dde18e3-kube-api-access-9mxqd\") pod \"03c328cb-43ac-46ab-8677-80be0dde18e3\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.773874 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-run-httpd\") pod \"03c328cb-43ac-46ab-8677-80be0dde18e3\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.773939 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-config-data\") pod \"03c328cb-43ac-46ab-8677-80be0dde18e3\" (UID: \"03c328cb-43ac-46ab-8677-80be0dde18e3\") " Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.777069 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "03c328cb-43ac-46ab-8677-80be0dde18e3" (UID: "03c328cb-43ac-46ab-8677-80be0dde18e3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.777324 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "03c328cb-43ac-46ab-8677-80be0dde18e3" (UID: "03c328cb-43ac-46ab-8677-80be0dde18e3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.782841 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-scripts" (OuterVolumeSpecName: "scripts") pod "03c328cb-43ac-46ab-8677-80be0dde18e3" (UID: "03c328cb-43ac-46ab-8677-80be0dde18e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.800914 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c328cb-43ac-46ab-8677-80be0dde18e3-kube-api-access-9mxqd" (OuterVolumeSpecName: "kube-api-access-9mxqd") pod "03c328cb-43ac-46ab-8677-80be0dde18e3" (UID: "03c328cb-43ac-46ab-8677-80be0dde18e3"). InnerVolumeSpecName "kube-api-access-9mxqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.839039 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "03c328cb-43ac-46ab-8677-80be0dde18e3" (UID: "03c328cb-43ac-46ab-8677-80be0dde18e3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.877468 4697 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.877490 4697 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.877501 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mxqd\" (UniqueName: \"kubernetes.io/projected/03c328cb-43ac-46ab-8677-80be0dde18e3-kube-api-access-9mxqd\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.877510 4697 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c328cb-43ac-46ab-8677-80be0dde18e3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.877518 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.888161 4697 generic.go:334] "Generic (PLEG): container finished" podID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerID="47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6" exitCode=0 Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.888223 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.888278 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerDied","Data":"47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6"} Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.888305 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c328cb-43ac-46ab-8677-80be0dde18e3","Type":"ContainerDied","Data":"5adb0afe24fc40aa7e95ea7a016e88f1734a987abe493fd3cdd1c5884963d390"} Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.888320 4697 scope.go:117] "RemoveContainer" containerID="0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.940527 4697 scope.go:117] "RemoveContainer" containerID="92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.944937 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "03c328cb-43ac-46ab-8677-80be0dde18e3" (UID: "03c328cb-43ac-46ab-8677-80be0dde18e3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.953383 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03c328cb-43ac-46ab-8677-80be0dde18e3" (UID: "03c328cb-43ac-46ab-8677-80be0dde18e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.967041 4697 scope.go:117] "RemoveContainer" containerID="47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.979097 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.979128 4697 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:06 crc kubenswrapper[4697]: I0127 15:33:06.990382 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-config-data" (OuterVolumeSpecName: "config-data") pod "03c328cb-43ac-46ab-8677-80be0dde18e3" (UID: "03c328cb-43ac-46ab-8677-80be0dde18e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.010720 4697 scope.go:117] "RemoveContainer" containerID="9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.042975 4697 scope.go:117] "RemoveContainer" containerID="0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992" Jan 27 15:33:07 crc kubenswrapper[4697]: E0127 15:33:07.043474 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992\": container with ID starting with 0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992 not found: ID does not exist" containerID="0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.043537 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992"} err="failed to get container status \"0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992\": rpc error: code = NotFound desc = could not find container \"0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992\": container with ID starting with 0cb882a3b482e879b610e80c95682d218d24e192067e0b4bf5109cf036a0c992 not found: ID does not exist" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.043567 4697 scope.go:117] "RemoveContainer" containerID="92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc" Jan 27 15:33:07 crc kubenswrapper[4697]: E0127 15:33:07.044112 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc\": container with ID starting with 
92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc not found: ID does not exist" containerID="92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.044168 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc"} err="failed to get container status \"92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc\": rpc error: code = NotFound desc = could not find container \"92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc\": container with ID starting with 92cca5b7726601daf3b480d90c65a4f71a815ceae54639d6abe846bb8fb869fc not found: ID does not exist" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.044204 4697 scope.go:117] "RemoveContainer" containerID="47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6" Jan 27 15:33:07 crc kubenswrapper[4697]: E0127 15:33:07.044752 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6\": container with ID starting with 47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6 not found: ID does not exist" containerID="47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.044804 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6"} err="failed to get container status \"47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6\": rpc error: code = NotFound desc = could not find container \"47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6\": container with ID starting with 47d755a0e7983898742460fed87e0d4f008c7020d44a017d1473810362d9e5c6 not found: ID does not 
exist" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.044823 4697 scope.go:117] "RemoveContainer" containerID="9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923" Jan 27 15:33:07 crc kubenswrapper[4697]: E0127 15:33:07.045230 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923\": container with ID starting with 9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923 not found: ID does not exist" containerID="9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.045270 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923"} err="failed to get container status \"9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923\": rpc error: code = NotFound desc = could not find container \"9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923\": container with ID starting with 9cf10d48eaa376374bcd61c81e74dba04a1a50dca2ca6df1ef81d081ecc07923 not found: ID does not exist" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.080766 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c328cb-43ac-46ab-8677-80be0dde18e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.231558 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.242581 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.276561 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:33:07 crc kubenswrapper[4697]: E0127 
15:33:07.277068 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="proxy-httpd" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.277095 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="proxy-httpd" Jan 27 15:33:07 crc kubenswrapper[4697]: E0127 15:33:07.277125 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="ceilometer-central-agent" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.277134 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="ceilometer-central-agent" Jan 27 15:33:07 crc kubenswrapper[4697]: E0127 15:33:07.277143 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="sg-core" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.277152 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="sg-core" Jan 27 15:33:07 crc kubenswrapper[4697]: E0127 15:33:07.277165 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="ceilometer-notification-agent" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.277172 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="ceilometer-notification-agent" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.277393 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="ceilometer-central-agent" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.277418 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="sg-core" Jan 27 15:33:07 crc 
kubenswrapper[4697]: I0127 15:33:07.277431 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="proxy-httpd" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.277446 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" containerName="ceilometer-notification-agent" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.288582 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.290634 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.290863 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.291025 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.312977 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.395859 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.397311 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc 
kubenswrapper[4697]: I0127 15:33:07.398252 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43946a66-4e74-47e4-bfd3-63256993e153-run-httpd\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.398969 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43946a66-4e74-47e4-bfd3-63256993e153-log-httpd\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.399004 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-config-data\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.399297 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tz68\" (UniqueName: \"kubernetes.io/projected/43946a66-4e74-47e4-bfd3-63256993e153-kube-api-access-5tz68\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.399358 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-scripts\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.399395 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.502274 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tz68\" (UniqueName: \"kubernetes.io/projected/43946a66-4e74-47e4-bfd3-63256993e153-kube-api-access-5tz68\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.502383 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-scripts\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.502412 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.502455 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.502476 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " 
pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.502501 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43946a66-4e74-47e4-bfd3-63256993e153-run-httpd\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.502526 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43946a66-4e74-47e4-bfd3-63256993e153-log-httpd\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.502544 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-config-data\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.506959 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43946a66-4e74-47e4-bfd3-63256993e153-log-httpd\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.507226 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43946a66-4e74-47e4-bfd3-63256993e153-run-httpd\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.509184 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.510331 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-scripts\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.511106 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.513795 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.518542 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43946a66-4e74-47e4-bfd3-63256993e153-config-data\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.537238 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tz68\" (UniqueName: \"kubernetes.io/projected/43946a66-4e74-47e4-bfd3-63256993e153-kube-api-access-5tz68\") pod \"ceilometer-0\" (UID: \"43946a66-4e74-47e4-bfd3-63256993e153\") " pod="openstack/ceilometer-0" Jan 27 15:33:07 crc kubenswrapper[4697]: I0127 15:33:07.633350 4697 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.123327 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.125442 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:33:08 crc kubenswrapper[4697]: W0127 15:33:08.137371 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43946a66_4e74_47e4_bfd3_63256993e153.slice/crio-10b732f6c7edf82adb9d4b3be5863634d38118efddbed930ef866c03ae52dd5e WatchSource:0}: Error finding container 10b732f6c7edf82adb9d4b3be5863634d38118efddbed930ef866c03ae52dd5e: Status 404 returned error can't find the container with id 10b732f6c7edf82adb9d4b3be5863634d38118efddbed930ef866c03ae52dd5e Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.157881 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.485625 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.525555 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-logs\") pod \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.525597 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv5wm\" (UniqueName: \"kubernetes.io/projected/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-kube-api-access-gv5wm\") pod \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.525630 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-config-data\") pod \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.525929 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-combined-ca-bundle\") pod \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\" (UID: \"a03e6dcb-fc88-458b-9f8d-36aa045a14ca\") " Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.526027 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-logs" (OuterVolumeSpecName: "logs") pod "a03e6dcb-fc88-458b-9f8d-36aa045a14ca" (UID: "a03e6dcb-fc88-458b-9f8d-36aa045a14ca"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.526420 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.532976 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-kube-api-access-gv5wm" (OuterVolumeSpecName: "kube-api-access-gv5wm") pod "a03e6dcb-fc88-458b-9f8d-36aa045a14ca" (UID: "a03e6dcb-fc88-458b-9f8d-36aa045a14ca"). InnerVolumeSpecName "kube-api-access-gv5wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.604759 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-config-data" (OuterVolumeSpecName: "config-data") pod "a03e6dcb-fc88-458b-9f8d-36aa045a14ca" (UID: "a03e6dcb-fc88-458b-9f8d-36aa045a14ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.624825 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c328cb-43ac-46ab-8677-80be0dde18e3" path="/var/lib/kubelet/pods/03c328cb-43ac-46ab-8677-80be0dde18e3/volumes" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.628456 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv5wm\" (UniqueName: \"kubernetes.io/projected/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-kube-api-access-gv5wm\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.628494 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.648998 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a03e6dcb-fc88-458b-9f8d-36aa045a14ca" (UID: "a03e6dcb-fc88-458b-9f8d-36aa045a14ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.732373 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03e6dcb-fc88-458b-9f8d-36aa045a14ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.926987 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43946a66-4e74-47e4-bfd3-63256993e153","Type":"ContainerStarted","Data":"28a34e4a96b34f3555c70456ff850490c92a5965b36c4c4c746de5b17af4626a"} Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.927035 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43946a66-4e74-47e4-bfd3-63256993e153","Type":"ContainerStarted","Data":"10b732f6c7edf82adb9d4b3be5863634d38118efddbed930ef866c03ae52dd5e"} Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.929483 4697 generic.go:334] "Generic (PLEG): container finished" podID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerID="194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80" exitCode=0 Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.929574 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.929559 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a03e6dcb-fc88-458b-9f8d-36aa045a14ca","Type":"ContainerDied","Data":"194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80"} Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.929703 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a03e6dcb-fc88-458b-9f8d-36aa045a14ca","Type":"ContainerDied","Data":"bc602f4376c6f50fab82de25aeb1761167909ee3981d44125cf0b923fdf160fc"} Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.929730 4697 scope.go:117] "RemoveContainer" containerID="194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.952092 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.960259 4697 scope.go:117] "RemoveContainer" containerID="fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.993874 4697 scope.go:117] "RemoveContainer" containerID="194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80" Jan 27 15:33:08 crc kubenswrapper[4697]: E0127 15:33:08.994267 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80\": container with ID starting with 194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80 not found: ID does not exist" containerID="194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.994295 4697 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80"} err="failed to get container status \"194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80\": rpc error: code = NotFound desc = could not find container \"194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80\": container with ID starting with 194e8851134b9bc0ab672fde5428d363e4d7f538ce6ca04f1637809eba62ce80 not found: ID does not exist" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.994315 4697 scope.go:117] "RemoveContainer" containerID="fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903" Jan 27 15:33:08 crc kubenswrapper[4697]: E0127 15:33:08.997777 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903\": container with ID starting with fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903 not found: ID does not exist" containerID="fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903" Jan 27 15:33:08 crc kubenswrapper[4697]: I0127 15:33:08.997857 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903"} err="failed to get container status \"fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903\": rpc error: code = NotFound desc = could not find container \"fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903\": container with ID starting with fd6ea4e0b4b8cd12606cf1a5309a5147a5ac6f2e68d99b3cd48cbbc957af7903 not found: ID does not exist" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.024500 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.047086 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 
15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.061395 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:09 crc kubenswrapper[4697]: E0127 15:33:09.061900 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-log" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.061924 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-log" Jan 27 15:33:09 crc kubenswrapper[4697]: E0127 15:33:09.061936 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-api" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.061944 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-api" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.062246 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-api" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.062271 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" containerName="nova-api-log" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.063508 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.066289 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.066570 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.067294 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.070630 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.142088 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65ldh\" (UniqueName: \"kubernetes.io/projected/20439f38-838f-4b3c-a062-8e5cfcca21b5-kube-api-access-65ldh\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.142125 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20439f38-838f-4b3c-a062-8e5cfcca21b5-logs\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.142175 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.142238 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.142261 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-config-data\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.142284 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-public-tls-certs\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.243542 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.243646 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.243672 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-config-data\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 
15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.243712 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-public-tls-certs\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.243810 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65ldh\" (UniqueName: \"kubernetes.io/projected/20439f38-838f-4b3c-a062-8e5cfcca21b5-kube-api-access-65ldh\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.243832 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20439f38-838f-4b3c-a062-8e5cfcca21b5-logs\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.244379 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20439f38-838f-4b3c-a062-8e5cfcca21b5-logs\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.258376 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.261459 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-config-data\") pod \"nova-api-0\" (UID: 
\"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.262052 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-public-tls-certs\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.270405 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.271186 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mls9d"] Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.272550 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.277042 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.278492 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65ldh\" (UniqueName: \"kubernetes.io/projected/20439f38-838f-4b3c-a062-8e5cfcca21b5-kube-api-access-65ldh\") pod \"nova-api-0\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.281139 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.326364 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mls9d"] Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.371348 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-config-data\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.371530 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.371589 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z4wg\" (UniqueName: 
\"kubernetes.io/projected/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-kube-api-access-9z4wg\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.371612 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-scripts\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.400290 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.483249 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-scripts\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.483358 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-config-data\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.483449 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.483485 
4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z4wg\" (UniqueName: \"kubernetes.io/projected/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-kube-api-access-9z4wg\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.497604 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-scripts\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.497827 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.499308 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-config-data\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:09 crc kubenswrapper[4697]: I0127 15:33:09.514840 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z4wg\" (UniqueName: \"kubernetes.io/projected/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-kube-api-access-9z4wg\") pod \"nova-cell1-cell-mapping-mls9d\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:10 crc kubenswrapper[4697]: I0127 15:33:09.704274 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:10 crc kubenswrapper[4697]: I0127 15:33:10.579874 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a03e6dcb-fc88-458b-9f8d-36aa045a14ca" path="/var/lib/kubelet/pods/a03e6dcb-fc88-458b-9f8d-36aa045a14ca/volumes" Jan 27 15:33:10 crc kubenswrapper[4697]: I0127 15:33:10.961960 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43946a66-4e74-47e4-bfd3-63256993e153","Type":"ContainerStarted","Data":"19960b0990abfdf27210df3d096d14217d5464766c0b19225180390ba84e0058"} Jan 27 15:33:10 crc kubenswrapper[4697]: I0127 15:33:10.968936 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" probeResult="failure" output=< Jan 27 15:33:10 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:33:10 crc kubenswrapper[4697]: > Jan 27 15:33:11 crc kubenswrapper[4697]: W0127 15:33:11.044256 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20439f38_838f_4b3c_a062_8e5cfcca21b5.slice/crio-05e8c1bf29b65a18f2655151debc32bf9594d0a317eba925c34c5a5ee2841410 WatchSource:0}: Error finding container 05e8c1bf29b65a18f2655151debc32bf9594d0a317eba925c34c5a5ee2841410: Status 404 returned error can't find the container with id 05e8c1bf29b65a18f2655151debc32bf9594d0a317eba925c34c5a5ee2841410 Jan 27 15:33:11 crc kubenswrapper[4697]: I0127 15:33:11.052401 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:11 crc kubenswrapper[4697]: I0127 15:33:11.068227 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mls9d"] Jan 27 15:33:11 crc kubenswrapper[4697]: I0127 15:33:11.973016 4697 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"43946a66-4e74-47e4-bfd3-63256993e153","Type":"ContainerStarted","Data":"7a4154c3120c8e593ad0d02be9b7ac0cf151d4456211e80f8d99e6d83584954a"} Jan 27 15:33:11 crc kubenswrapper[4697]: I0127 15:33:11.975276 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20439f38-838f-4b3c-a062-8e5cfcca21b5","Type":"ContainerStarted","Data":"e063f02d110f71ca42ad86e34aa822a8a51327f3a341d96685776a95b4c57544"} Jan 27 15:33:11 crc kubenswrapper[4697]: I0127 15:33:11.975310 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20439f38-838f-4b3c-a062-8e5cfcca21b5","Type":"ContainerStarted","Data":"81b1e77119fa417fd0788720bff7571d4b94069fc208864934c258db1a333043"} Jan 27 15:33:11 crc kubenswrapper[4697]: I0127 15:33:11.975324 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20439f38-838f-4b3c-a062-8e5cfcca21b5","Type":"ContainerStarted","Data":"05e8c1bf29b65a18f2655151debc32bf9594d0a317eba925c34c5a5ee2841410"} Jan 27 15:33:11 crc kubenswrapper[4697]: I0127 15:33:11.978426 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mls9d" event={"ID":"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b","Type":"ContainerStarted","Data":"6c713ae0a01cf18d1cffdf269e6e59001a1834caed56b26e6be53ed3fb283d13"} Jan 27 15:33:11 crc kubenswrapper[4697]: I0127 15:33:11.978466 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mls9d" event={"ID":"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b","Type":"ContainerStarted","Data":"31fc16aeb61c34fd3d319904fca57e3baedc60eb9ad194df19ca57d99ad6e0a9"} Jan 27 15:33:12 crc kubenswrapper[4697]: I0127 15:33:12.000818 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.000802659 podStartE2EDuration="3.000802659s" podCreationTimestamp="2026-01-27 15:33:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:33:11.993044939 +0000 UTC m=+1488.165444720" watchObservedRunningTime="2026-01-27 15:33:12.000802659 +0000 UTC m=+1488.173202450" Jan 27 15:33:12 crc kubenswrapper[4697]: I0127 15:33:12.015829 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mls9d" podStartSLOduration=3.015803534 podStartE2EDuration="3.015803534s" podCreationTimestamp="2026-01-27 15:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:33:12.011656764 +0000 UTC m=+1488.184056545" watchObservedRunningTime="2026-01-27 15:33:12.015803534 +0000 UTC m=+1488.188203315" Jan 27 15:33:12 crc kubenswrapper[4697]: I0127 15:33:12.331611 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:33:12 crc kubenswrapper[4697]: I0127 15:33:12.388899 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-czt4r"] Jan 27 15:33:12 crc kubenswrapper[4697]: I0127 15:33:12.389213 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" podUID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" containerName="dnsmasq-dns" containerID="cri-o://26d509663ef0b579cee9235ec1ed1efb1c8fabf44b0eefc9b39c6b9ae718c769" gracePeriod=10 Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.021836 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43946a66-4e74-47e4-bfd3-63256993e153","Type":"ContainerStarted","Data":"6c75cd4698aaba7acc1a9c6a0dff217513cf68c67f72506a4293eee9b17338aa"} Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.022432 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.025486 4697 generic.go:334] "Generic (PLEG): container finished" podID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" containerID="26d509663ef0b579cee9235ec1ed1efb1c8fabf44b0eefc9b39c6b9ae718c769" exitCode=0 Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.026630 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" event={"ID":"2ff89669-e519-40aa-bf6e-93e0d6ebced7","Type":"ContainerDied","Data":"26d509663ef0b579cee9235ec1ed1efb1c8fabf44b0eefc9b39c6b9ae718c769"} Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.026667 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" event={"ID":"2ff89669-e519-40aa-bf6e-93e0d6ebced7","Type":"ContainerDied","Data":"b8ece5b9d60b5baaefce0a72baf3feb919bfaf3c14d4ad0a1d12970078b90896"} Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.026682 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8ece5b9d60b5baaefce0a72baf3feb919bfaf3c14d4ad0a1d12970078b90896" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.049088 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.668942498 podStartE2EDuration="6.049071658s" podCreationTimestamp="2026-01-27 15:33:07 +0000 UTC" firstStartedPulling="2026-01-27 15:33:08.140419157 +0000 UTC m=+1484.312818958" lastFinishedPulling="2026-01-27 15:33:12.520548337 +0000 UTC m=+1488.692948118" observedRunningTime="2026-01-27 15:33:13.043060022 +0000 UTC m=+1489.215459803" watchObservedRunningTime="2026-01-27 15:33:13.049071658 +0000 UTC m=+1489.221471439" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.056232 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.170179 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-swift-storage-0\") pod \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.170252 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-svc\") pod \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.170326 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpd28\" (UniqueName: \"kubernetes.io/projected/2ff89669-e519-40aa-bf6e-93e0d6ebced7-kube-api-access-zpd28\") pod \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.170408 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-sb\") pod \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.170561 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-config\") pod \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.170649 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-nb\") pod \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\" (UID: \"2ff89669-e519-40aa-bf6e-93e0d6ebced7\") " Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.196068 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff89669-e519-40aa-bf6e-93e0d6ebced7-kube-api-access-zpd28" (OuterVolumeSpecName: "kube-api-access-zpd28") pod "2ff89669-e519-40aa-bf6e-93e0d6ebced7" (UID: "2ff89669-e519-40aa-bf6e-93e0d6ebced7"). InnerVolumeSpecName "kube-api-access-zpd28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.244931 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ff89669-e519-40aa-bf6e-93e0d6ebced7" (UID: "2ff89669-e519-40aa-bf6e-93e0d6ebced7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.249234 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2ff89669-e519-40aa-bf6e-93e0d6ebced7" (UID: "2ff89669-e519-40aa-bf6e-93e0d6ebced7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.255998 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ff89669-e519-40aa-bf6e-93e0d6ebced7" (UID: "2ff89669-e519-40aa-bf6e-93e0d6ebced7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.261716 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ff89669-e519-40aa-bf6e-93e0d6ebced7" (UID: "2ff89669-e519-40aa-bf6e-93e0d6ebced7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.270330 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-config" (OuterVolumeSpecName: "config") pod "2ff89669-e519-40aa-bf6e-93e0d6ebced7" (UID: "2ff89669-e519-40aa-bf6e-93e0d6ebced7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.273132 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.273326 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.273431 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.273528 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:13 crc 
kubenswrapper[4697]: I0127 15:33:13.273632 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpd28\" (UniqueName: \"kubernetes.io/projected/2ff89669-e519-40aa-bf6e-93e0d6ebced7-kube-api-access-zpd28\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:13 crc kubenswrapper[4697]: I0127 15:33:13.273726 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff89669-e519-40aa-bf6e-93e0d6ebced7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:14 crc kubenswrapper[4697]: I0127 15:33:14.033253 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" Jan 27 15:33:14 crc kubenswrapper[4697]: I0127 15:33:14.081983 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-czt4r"] Jan 27 15:33:14 crc kubenswrapper[4697]: I0127 15:33:14.093547 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-czt4r"] Jan 27 15:33:14 crc kubenswrapper[4697]: I0127 15:33:14.578130 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" path="/var/lib/kubelet/pods/2ff89669-e519-40aa-bf6e-93e0d6ebced7/volumes" Jan 27 15:33:17 crc kubenswrapper[4697]: I0127 15:33:17.978345 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-czt4r" podUID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: i/o timeout" Jan 27 15:33:19 crc kubenswrapper[4697]: I0127 15:33:19.090851 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b" containerID="6c713ae0a01cf18d1cffdf269e6e59001a1834caed56b26e6be53ed3fb283d13" exitCode=0 Jan 27 15:33:19 crc kubenswrapper[4697]: I0127 15:33:19.090897 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-mls9d" event={"ID":"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b","Type":"ContainerDied","Data":"6c713ae0a01cf18d1cffdf269e6e59001a1834caed56b26e6be53ed3fb283d13"} Jan 27 15:33:19 crc kubenswrapper[4697]: I0127 15:33:19.401020 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 15:33:19 crc kubenswrapper[4697]: I0127 15:33:19.401058 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.415978 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.416211 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.536182 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.645112 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4wg\" (UniqueName: \"kubernetes.io/projected/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-kube-api-access-9z4wg\") pod \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.645251 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-combined-ca-bundle\") pod \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.646951 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-config-data\") pod \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.647013 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-scripts\") pod \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\" (UID: \"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b\") " Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.651267 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-kube-api-access-9z4wg" (OuterVolumeSpecName: "kube-api-access-9z4wg") pod "e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b" (UID: "e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b"). InnerVolumeSpecName "kube-api-access-9z4wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.652700 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-scripts" (OuterVolumeSpecName: "scripts") pod "e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b" (UID: "e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.684474 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-config-data" (OuterVolumeSpecName: "config-data") pod "e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b" (UID: "e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.707106 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b" (UID: "e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.748873 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.748913 4697 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.748925 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z4wg\" (UniqueName: \"kubernetes.io/projected/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-kube-api-access-9z4wg\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.748937 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:20 crc kubenswrapper[4697]: I0127 15:33:20.954169 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" probeResult="failure" output=< Jan 27 15:33:20 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:33:20 crc kubenswrapper[4697]: > Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.109522 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mls9d" event={"ID":"e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b","Type":"ContainerDied","Data":"31fc16aeb61c34fd3d319904fca57e3baedc60eb9ad194df19ca57d99ad6e0a9"} Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.109560 4697 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="31fc16aeb61c34fd3d319904fca57e3baedc60eb9ad194df19ca57d99ad6e0a9" Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.109624 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mls9d" Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.290052 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.290451 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerName="nova-api-log" containerID="cri-o://81b1e77119fa417fd0788720bff7571d4b94069fc208864934c258db1a333043" gracePeriod=30 Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.290575 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerName="nova-api-api" containerID="cri-o://e063f02d110f71ca42ad86e34aa822a8a51327f3a341d96685776a95b4c57544" gracePeriod=30 Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.307753 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.307962 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="501c9575-b5b8-4c4a-a225-f1853eec3ea8" containerName="nova-scheduler-scheduler" containerID="cri-o://7646f676d62a2b950c52eadb999f0729d3653a85b166248081dd1d209dbf2644" gracePeriod=30 Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.425986 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.426195 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" 
containerName="nova-metadata-log" containerID="cri-o://b62e58371635562f614aa3b510293c65bf2615dee1aaeb18d11aa77f37d36b53" gracePeriod=30 Jan 27 15:33:21 crc kubenswrapper[4697]: I0127 15:33:21.426438 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-metadata" containerID="cri-o://ed8fc3f91330cf9c756171832c32313774b0325c7eb210f00e86b4a60338e6a5" gracePeriod=30 Jan 27 15:33:22 crc kubenswrapper[4697]: I0127 15:33:22.121166 4697 generic.go:334] "Generic (PLEG): container finished" podID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerID="b62e58371635562f614aa3b510293c65bf2615dee1aaeb18d11aa77f37d36b53" exitCode=143 Jan 27 15:33:22 crc kubenswrapper[4697]: I0127 15:33:22.121247 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c0d44e3f-5773-4f1d-98bd-6ee63096a361","Type":"ContainerDied","Data":"b62e58371635562f614aa3b510293c65bf2615dee1aaeb18d11aa77f37d36b53"} Jan 27 15:33:22 crc kubenswrapper[4697]: I0127 15:33:22.124334 4697 generic.go:334] "Generic (PLEG): container finished" podID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerID="81b1e77119fa417fd0788720bff7571d4b94069fc208864934c258db1a333043" exitCode=143 Jan 27 15:33:22 crc kubenswrapper[4697]: I0127 15:33:22.124447 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20439f38-838f-4b3c-a062-8e5cfcca21b5","Type":"ContainerDied","Data":"81b1e77119fa417fd0788720bff7571d4b94069fc208864934c258db1a333043"} Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.139583 4697 generic.go:334] "Generic (PLEG): container finished" podID="501c9575-b5b8-4c4a-a225-f1853eec3ea8" containerID="7646f676d62a2b950c52eadb999f0729d3653a85b166248081dd1d209dbf2644" exitCode=0 Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.139653 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"501c9575-b5b8-4c4a-a225-f1853eec3ea8","Type":"ContainerDied","Data":"7646f676d62a2b950c52eadb999f0729d3653a85b166248081dd1d209dbf2644"} Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.456602 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.614675 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mghhh\" (UniqueName: \"kubernetes.io/projected/501c9575-b5b8-4c4a-a225-f1853eec3ea8-kube-api-access-mghhh\") pod \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.615107 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-combined-ca-bundle\") pod \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.615203 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-config-data\") pod \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\" (UID: \"501c9575-b5b8-4c4a-a225-f1853eec3ea8\") " Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.623913 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501c9575-b5b8-4c4a-a225-f1853eec3ea8-kube-api-access-mghhh" (OuterVolumeSpecName: "kube-api-access-mghhh") pod "501c9575-b5b8-4c4a-a225-f1853eec3ea8" (UID: "501c9575-b5b8-4c4a-a225-f1853eec3ea8"). InnerVolumeSpecName "kube-api-access-mghhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.646002 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "501c9575-b5b8-4c4a-a225-f1853eec3ea8" (UID: "501c9575-b5b8-4c4a-a225-f1853eec3ea8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.650231 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-config-data" (OuterVolumeSpecName: "config-data") pod "501c9575-b5b8-4c4a-a225-f1853eec3ea8" (UID: "501c9575-b5b8-4c4a-a225-f1853eec3ea8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.717700 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.717893 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c9575-b5b8-4c4a-a225-f1853eec3ea8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:23 crc kubenswrapper[4697]: I0127 15:33:23.717930 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mghhh\" (UniqueName: \"kubernetes.io/projected/501c9575-b5b8-4c4a-a225-f1853eec3ea8-kube-api-access-mghhh\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.153238 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"501c9575-b5b8-4c4a-a225-f1853eec3ea8","Type":"ContainerDied","Data":"f88979a433a5e2a0fc3fce1d84be78e51fd5dd6eda5e7c9667699c23b61c1d72"} Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.153306 4697 scope.go:117] "RemoveContainer" containerID="7646f676d62a2b950c52eadb999f0729d3653a85b166248081dd1d209dbf2644" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.153345 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.198054 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.218197 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.234391 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:33:24 crc kubenswrapper[4697]: E0127 15:33:24.234922 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501c9575-b5b8-4c4a-a225-f1853eec3ea8" containerName="nova-scheduler-scheduler" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.234944 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="501c9575-b5b8-4c4a-a225-f1853eec3ea8" containerName="nova-scheduler-scheduler" Jan 27 15:33:24 crc kubenswrapper[4697]: E0127 15:33:24.234976 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" containerName="dnsmasq-dns" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.234985 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" containerName="dnsmasq-dns" Jan 27 15:33:24 crc kubenswrapper[4697]: E0127 15:33:24.235020 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" containerName="init" Jan 27 15:33:24 crc kubenswrapper[4697]: 
I0127 15:33:24.235028 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" containerName="init" Jan 27 15:33:24 crc kubenswrapper[4697]: E0127 15:33:24.235045 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b" containerName="nova-manage" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.235052 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b" containerName="nova-manage" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.235313 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="501c9575-b5b8-4c4a-a225-f1853eec3ea8" containerName="nova-scheduler-scheduler" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.235333 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff89669-e519-40aa-bf6e-93e0d6ebced7" containerName="dnsmasq-dns" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.235353 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b" containerName="nova-manage" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.236155 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.238570 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.255674 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.428419 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbt66\" (UniqueName: \"kubernetes.io/projected/0aac0bcf-d6ae-4188-b597-e42935d81d0e-kube-api-access-xbt66\") pod \"nova-scheduler-0\" (UID: \"0aac0bcf-d6ae-4188-b597-e42935d81d0e\") " pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.428542 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aac0bcf-d6ae-4188-b597-e42935d81d0e-config-data\") pod \"nova-scheduler-0\" (UID: \"0aac0bcf-d6ae-4188-b597-e42935d81d0e\") " pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.428578 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aac0bcf-d6ae-4188-b597-e42935d81d0e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0aac0bcf-d6ae-4188-b597-e42935d81d0e\") " pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.530636 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aac0bcf-d6ae-4188-b597-e42935d81d0e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0aac0bcf-d6ae-4188-b597-e42935d81d0e\") " pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.530813 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbt66\" (UniqueName: \"kubernetes.io/projected/0aac0bcf-d6ae-4188-b597-e42935d81d0e-kube-api-access-xbt66\") pod \"nova-scheduler-0\" (UID: \"0aac0bcf-d6ae-4188-b597-e42935d81d0e\") " pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.530920 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aac0bcf-d6ae-4188-b597-e42935d81d0e-config-data\") pod \"nova-scheduler-0\" (UID: \"0aac0bcf-d6ae-4188-b597-e42935d81d0e\") " pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.533773 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.541159 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aac0bcf-d6ae-4188-b597-e42935d81d0e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0aac0bcf-d6ae-4188-b597-e42935d81d0e\") " pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.545030 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aac0bcf-d6ae-4188-b597-e42935d81d0e-config-data\") pod \"nova-scheduler-0\" (UID: \"0aac0bcf-d6ae-4188-b597-e42935d81d0e\") " pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.571991 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbt66\" (UniqueName: \"kubernetes.io/projected/0aac0bcf-d6ae-4188-b597-e42935d81d0e-kube-api-access-xbt66\") pod \"nova-scheduler-0\" (UID: \"0aac0bcf-d6ae-4188-b597-e42935d81d0e\") " pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.599401 4697 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="501c9575-b5b8-4c4a-a225-f1853eec3ea8" path="/var/lib/kubelet/pods/501c9575-b5b8-4c4a-a225-f1853eec3ea8/volumes" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.855556 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.863549 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:53328->10.217.0.199:8775: read: connection reset by peer" Jan 27 15:33:24 crc kubenswrapper[4697]: I0127 15:33:24.863895 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:53336->10.217.0.199:8775: read: connection reset by peer" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.109129 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.109468 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.169692 4697 generic.go:334] "Generic (PLEG): container finished" 
podID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerID="ed8fc3f91330cf9c756171832c32313774b0325c7eb210f00e86b4a60338e6a5" exitCode=0 Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.169760 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c0d44e3f-5773-4f1d-98bd-6ee63096a361","Type":"ContainerDied","Data":"ed8fc3f91330cf9c756171832c32313774b0325c7eb210f00e86b4a60338e6a5"} Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.371352 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.412862 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.449850 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-config-data\") pod \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.449916 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-nova-metadata-tls-certs\") pod \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.449943 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-combined-ca-bundle\") pod \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.449976 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-p28nn\" (UniqueName: \"kubernetes.io/projected/c0d44e3f-5773-4f1d-98bd-6ee63096a361-kube-api-access-p28nn\") pod \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.450018 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d44e3f-5773-4f1d-98bd-6ee63096a361-logs\") pod \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\" (UID: \"c0d44e3f-5773-4f1d-98bd-6ee63096a361\") " Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.450713 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d44e3f-5773-4f1d-98bd-6ee63096a361-logs" (OuterVolumeSpecName: "logs") pod "c0d44e3f-5773-4f1d-98bd-6ee63096a361" (UID: "c0d44e3f-5773-4f1d-98bd-6ee63096a361"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.464834 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d44e3f-5773-4f1d-98bd-6ee63096a361-kube-api-access-p28nn" (OuterVolumeSpecName: "kube-api-access-p28nn") pod "c0d44e3f-5773-4f1d-98bd-6ee63096a361" (UID: "c0d44e3f-5773-4f1d-98bd-6ee63096a361"). InnerVolumeSpecName "kube-api-access-p28nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.496362 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0d44e3f-5773-4f1d-98bd-6ee63096a361" (UID: "c0d44e3f-5773-4f1d-98bd-6ee63096a361"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.526013 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-config-data" (OuterVolumeSpecName: "config-data") pod "c0d44e3f-5773-4f1d-98bd-6ee63096a361" (UID: "c0d44e3f-5773-4f1d-98bd-6ee63096a361"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.551680 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.551709 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.551721 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p28nn\" (UniqueName: \"kubernetes.io/projected/c0d44e3f-5773-4f1d-98bd-6ee63096a361-kube-api-access-p28nn\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.551730 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d44e3f-5773-4f1d-98bd-6ee63096a361-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.551778 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c0d44e3f-5773-4f1d-98bd-6ee63096a361" (UID: "c0d44e3f-5773-4f1d-98bd-6ee63096a361"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:25 crc kubenswrapper[4697]: I0127 15:33:25.653556 4697 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d44e3f-5773-4f1d-98bd-6ee63096a361-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.180338 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0aac0bcf-d6ae-4188-b597-e42935d81d0e","Type":"ContainerStarted","Data":"d71da036373131a33e9ee664da8271e7790d69aa09e110822c91a01213d8c4b6"} Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.180641 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0aac0bcf-d6ae-4188-b597-e42935d81d0e","Type":"ContainerStarted","Data":"f25271b574bda69c1924069739efc3d281277cabd848dd7df68f011f6d12d7fd"} Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.182130 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c0d44e3f-5773-4f1d-98bd-6ee63096a361","Type":"ContainerDied","Data":"9246ca622c5df714c4b34a53adf5e148dbe9b77906eb11d7f650ae403a8883e0"} Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.182179 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.182183 4697 scope.go:117] "RemoveContainer" containerID="ed8fc3f91330cf9c756171832c32313774b0325c7eb210f00e86b4a60338e6a5" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.204094 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.204072396 podStartE2EDuration="2.204072396s" podCreationTimestamp="2026-01-27 15:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:33:26.202339884 +0000 UTC m=+1502.374739665" watchObservedRunningTime="2026-01-27 15:33:26.204072396 +0000 UTC m=+1502.376472177" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.212162 4697 scope.go:117] "RemoveContainer" containerID="b62e58371635562f614aa3b510293c65bf2615dee1aaeb18d11aa77f37d36b53" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.238749 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.261844 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.267066 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:33:26 crc kubenswrapper[4697]: E0127 15:33:26.267419 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-log" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.267435 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-log" Jan 27 15:33:26 crc kubenswrapper[4697]: E0127 15:33:26.267479 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" 
containerName="nova-metadata-metadata" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.267485 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-metadata" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.267640 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-log" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.267671 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" containerName="nova-metadata-metadata" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.272944 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.276754 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.276966 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.282978 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.479748 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c1b720b-0b31-4c5d-9306-ca65e780dc12-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.479942 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bgdn\" (UniqueName: \"kubernetes.io/projected/3c1b720b-0b31-4c5d-9306-ca65e780dc12-kube-api-access-2bgdn\") 
pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.479973 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c1b720b-0b31-4c5d-9306-ca65e780dc12-logs\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.480009 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c1b720b-0b31-4c5d-9306-ca65e780dc12-config-data\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.480039 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1b720b-0b31-4c5d-9306-ca65e780dc12-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.578237 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d44e3f-5773-4f1d-98bd-6ee63096a361" path="/var/lib/kubelet/pods/c0d44e3f-5773-4f1d-98bd-6ee63096a361/volumes" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.581044 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c1b720b-0b31-4c5d-9306-ca65e780dc12-logs\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.581088 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3c1b720b-0b31-4c5d-9306-ca65e780dc12-config-data\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.581123 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1b720b-0b31-4c5d-9306-ca65e780dc12-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.581170 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c1b720b-0b31-4c5d-9306-ca65e780dc12-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.581258 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bgdn\" (UniqueName: \"kubernetes.io/projected/3c1b720b-0b31-4c5d-9306-ca65e780dc12-kube-api-access-2bgdn\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.581496 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c1b720b-0b31-4c5d-9306-ca65e780dc12-logs\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.587339 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c1b720b-0b31-4c5d-9306-ca65e780dc12-config-data\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" 
Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.590023 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1b720b-0b31-4c5d-9306-ca65e780dc12-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.600444 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c1b720b-0b31-4c5d-9306-ca65e780dc12-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.605445 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bgdn\" (UniqueName: \"kubernetes.io/projected/3c1b720b-0b31-4c5d-9306-ca65e780dc12-kube-api-access-2bgdn\") pod \"nova-metadata-0\" (UID: \"3c1b720b-0b31-4c5d-9306-ca65e780dc12\") " pod="openstack/nova-metadata-0" Jan 27 15:33:26 crc kubenswrapper[4697]: I0127 15:33:26.892008 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 15:33:27 crc kubenswrapper[4697]: I0127 15:33:27.363810 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.235345 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c1b720b-0b31-4c5d-9306-ca65e780dc12","Type":"ContainerStarted","Data":"8060f53bbf3820e66dc6f67bdf684b69cabdd0a9e516819781cad1eff1ea1a5e"} Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.235886 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c1b720b-0b31-4c5d-9306-ca65e780dc12","Type":"ContainerStarted","Data":"6d8fbc4a9b499c4d9885bcc978ac495a366deccd7b43c87f5dcc6beb5d1175d1"} Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.238292 4697 generic.go:334] "Generic (PLEG): container finished" podID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerID="e063f02d110f71ca42ad86e34aa822a8a51327f3a341d96685776a95b4c57544" exitCode=0 Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.238331 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20439f38-838f-4b3c-a062-8e5cfcca21b5","Type":"ContainerDied","Data":"e063f02d110f71ca42ad86e34aa822a8a51327f3a341d96685776a95b4c57544"} Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.376704 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.429575 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-public-tls-certs\") pod \"20439f38-838f-4b3c-a062-8e5cfcca21b5\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.429718 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20439f38-838f-4b3c-a062-8e5cfcca21b5-logs\") pod \"20439f38-838f-4b3c-a062-8e5cfcca21b5\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.429934 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65ldh\" (UniqueName: \"kubernetes.io/projected/20439f38-838f-4b3c-a062-8e5cfcca21b5-kube-api-access-65ldh\") pod \"20439f38-838f-4b3c-a062-8e5cfcca21b5\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.429992 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-internal-tls-certs\") pod \"20439f38-838f-4b3c-a062-8e5cfcca21b5\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.430033 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-config-data\") pod \"20439f38-838f-4b3c-a062-8e5cfcca21b5\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.430071 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-combined-ca-bundle\") pod \"20439f38-838f-4b3c-a062-8e5cfcca21b5\" (UID: \"20439f38-838f-4b3c-a062-8e5cfcca21b5\") " Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.431408 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20439f38-838f-4b3c-a062-8e5cfcca21b5-logs" (OuterVolumeSpecName: "logs") pod "20439f38-838f-4b3c-a062-8e5cfcca21b5" (UID: "20439f38-838f-4b3c-a062-8e5cfcca21b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.434454 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20439f38-838f-4b3c-a062-8e5cfcca21b5-kube-api-access-65ldh" (OuterVolumeSpecName: "kube-api-access-65ldh") pod "20439f38-838f-4b3c-a062-8e5cfcca21b5" (UID: "20439f38-838f-4b3c-a062-8e5cfcca21b5"). InnerVolumeSpecName "kube-api-access-65ldh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.459521 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20439f38-838f-4b3c-a062-8e5cfcca21b5" (UID: "20439f38-838f-4b3c-a062-8e5cfcca21b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.471839 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-config-data" (OuterVolumeSpecName: "config-data") pod "20439f38-838f-4b3c-a062-8e5cfcca21b5" (UID: "20439f38-838f-4b3c-a062-8e5cfcca21b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.476040 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "20439f38-838f-4b3c-a062-8e5cfcca21b5" (UID: "20439f38-838f-4b3c-a062-8e5cfcca21b5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.489561 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "20439f38-838f-4b3c-a062-8e5cfcca21b5" (UID: "20439f38-838f-4b3c-a062-8e5cfcca21b5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.539777 4697 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.539830 4697 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20439f38-838f-4b3c-a062-8e5cfcca21b5-logs\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.539847 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65ldh\" (UniqueName: \"kubernetes.io/projected/20439f38-838f-4b3c-a062-8e5cfcca21b5-kube-api-access-65ldh\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.539860 4697 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-internal-tls-certs\") on node \"crc\" 
DevicePath \"\"" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.539871 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:28 crc kubenswrapper[4697]: I0127 15:33:28.539881 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20439f38-838f-4b3c-a062-8e5cfcca21b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.248274 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c1b720b-0b31-4c5d-9306-ca65e780dc12","Type":"ContainerStarted","Data":"08698650300d321960c1d175eb620016a024f32dc475cf6dfe7bb5bae0632e28"} Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.249811 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20439f38-838f-4b3c-a062-8e5cfcca21b5","Type":"ContainerDied","Data":"05e8c1bf29b65a18f2655151debc32bf9594d0a317eba925c34c5a5ee2841410"} Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.249854 4697 scope.go:117] "RemoveContainer" containerID="e063f02d110f71ca42ad86e34aa822a8a51327f3a341d96685776a95b4c57544" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.250024 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.276939 4697 scope.go:117] "RemoveContainer" containerID="81b1e77119fa417fd0788720bff7571d4b94069fc208864934c258db1a333043" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.292861 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.292826272 podStartE2EDuration="3.292826272s" podCreationTimestamp="2026-01-27 15:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:33:29.292605937 +0000 UTC m=+1505.465005728" watchObservedRunningTime="2026-01-27 15:33:29.292826272 +0000 UTC m=+1505.465226053" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.321991 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.341426 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.353569 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:29 crc kubenswrapper[4697]: E0127 15:33:29.354169 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerName="nova-api-log" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.354191 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerName="nova-api-log" Jan 27 15:33:29 crc kubenswrapper[4697]: E0127 15:33:29.354243 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerName="nova-api-api" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.354252 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" 
containerName="nova-api-api" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.354486 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerName="nova-api-log" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.354515 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" containerName="nova-api-api" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.355767 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.358122 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.358286 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.362046 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.363368 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.456555 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-config-data\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.456643 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-public-tls-certs\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc 
kubenswrapper[4697]: I0127 15:33:29.456677 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpt22\" (UniqueName: \"kubernetes.io/projected/f7a25f76-cbe2-44a4-911d-40b875d2f934-kube-api-access-bpt22\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.456750 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a25f76-cbe2-44a4-911d-40b875d2f934-logs\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.456890 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.457140 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.558651 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a25f76-cbe2-44a4-911d-40b875d2f934-logs\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.559008 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.559111 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.559135 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a25f76-cbe2-44a4-911d-40b875d2f934-logs\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.559147 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-config-data\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.559344 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-public-tls-certs\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.559385 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpt22\" (UniqueName: \"kubernetes.io/projected/f7a25f76-cbe2-44a4-911d-40b875d2f934-kube-api-access-bpt22\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.564720 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-public-tls-certs\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.566858 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-config-data\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.570068 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.570835 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a25f76-cbe2-44a4-911d-40b875d2f934-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.585497 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpt22\" (UniqueName: \"kubernetes.io/projected/f7a25f76-cbe2-44a4-911d-40b875d2f934-kube-api-access-bpt22\") pod \"nova-api-0\" (UID: \"f7a25f76-cbe2-44a4-911d-40b875d2f934\") " pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.693155 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 15:33:29 crc kubenswrapper[4697]: I0127 15:33:29.856104 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 15:33:30 crc kubenswrapper[4697]: W0127 15:33:30.245443 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7a25f76_cbe2_44a4_911d_40b875d2f934.slice/crio-c2ee237e044f43507d331a28c5d15998240cb2eb0bc41887bab1e6e6dec1fc4e WatchSource:0}: Error finding container c2ee237e044f43507d331a28c5d15998240cb2eb0bc41887bab1e6e6dec1fc4e: Status 404 returned error can't find the container with id c2ee237e044f43507d331a28c5d15998240cb2eb0bc41887bab1e6e6dec1fc4e Jan 27 15:33:30 crc kubenswrapper[4697]: I0127 15:33:30.256818 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 15:33:30 crc kubenswrapper[4697]: I0127 15:33:30.262129 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7a25f76-cbe2-44a4-911d-40b875d2f934","Type":"ContainerStarted","Data":"c2ee237e044f43507d331a28c5d15998240cb2eb0bc41887bab1e6e6dec1fc4e"} Jan 27 15:33:30 crc kubenswrapper[4697]: I0127 15:33:30.581177 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20439f38-838f-4b3c-a062-8e5cfcca21b5" path="/var/lib/kubelet/pods/20439f38-838f-4b3c-a062-8e5cfcca21b5/volumes" Jan 27 15:33:30 crc kubenswrapper[4697]: I0127 15:33:30.974415 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" probeResult="failure" output=< Jan 27 15:33:30 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:33:30 crc kubenswrapper[4697]: > Jan 27 15:33:31 crc kubenswrapper[4697]: I0127 15:33:31.272577 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"f7a25f76-cbe2-44a4-911d-40b875d2f934","Type":"ContainerStarted","Data":"db3f0aedbb2dbd994dc76529c4f8f1322662316d22a13c08f5b12f734a726510"} Jan 27 15:33:31 crc kubenswrapper[4697]: I0127 15:33:31.892365 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 15:33:31 crc kubenswrapper[4697]: I0127 15:33:31.892724 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 15:33:32 crc kubenswrapper[4697]: I0127 15:33:32.284227 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7a25f76-cbe2-44a4-911d-40b875d2f934","Type":"ContainerStarted","Data":"c02247897e1c3c04f966f88dc8854cf0505641afbc24fe9c523ef6d874156acf"} Jan 27 15:33:32 crc kubenswrapper[4697]: I0127 15:33:32.307630 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.307603973 podStartE2EDuration="3.307603973s" podCreationTimestamp="2026-01-27 15:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:33:32.302478348 +0000 UTC m=+1508.474878129" watchObservedRunningTime="2026-01-27 15:33:32.307603973 +0000 UTC m=+1508.480003754" Jan 27 15:33:34 crc kubenswrapper[4697]: I0127 15:33:34.855850 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 15:33:34 crc kubenswrapper[4697]: I0127 15:33:34.883288 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 15:33:35 crc kubenswrapper[4697]: I0127 15:33:35.423763 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 15:33:36 crc kubenswrapper[4697]: I0127 15:33:36.892386 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 15:33:36 crc kubenswrapper[4697]: I0127 15:33:36.892726 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 15:33:37 crc kubenswrapper[4697]: I0127 15:33:37.641021 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 15:33:37 crc kubenswrapper[4697]: I0127 15:33:37.905107 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3c1b720b-0b31-4c5d-9306-ca65e780dc12" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:33:37 crc kubenswrapper[4697]: I0127 15:33:37.905109 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3c1b720b-0b31-4c5d-9306-ca65e780dc12" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:33:39 crc kubenswrapper[4697]: I0127 15:33:39.694503 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 15:33:39 crc kubenswrapper[4697]: I0127 15:33:39.694901 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 15:33:40 crc kubenswrapper[4697]: I0127 15:33:40.705993 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f7a25f76-cbe2-44a4-911d-40b875d2f934" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:33:40 crc kubenswrapper[4697]: I0127 15:33:40.706008 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="f7a25f76-cbe2-44a4-911d-40b875d2f934" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:33:41 crc kubenswrapper[4697]: I0127 15:33:41.065365 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" probeResult="failure" output=< Jan 27 15:33:41 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:33:41 crc kubenswrapper[4697]: > Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.452616 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-68m2q"] Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.454916 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.474403 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-68m2q"] Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.647495 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkt8f\" (UniqueName: \"kubernetes.io/projected/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-kube-api-access-zkt8f\") pod \"redhat-marketplace-68m2q\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.648434 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-utilities\") pod \"redhat-marketplace-68m2q\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 
27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.648505 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-catalog-content\") pod \"redhat-marketplace-68m2q\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.749988 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkt8f\" (UniqueName: \"kubernetes.io/projected/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-kube-api-access-zkt8f\") pod \"redhat-marketplace-68m2q\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.750078 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-utilities\") pod \"redhat-marketplace-68m2q\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.750152 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-catalog-content\") pod \"redhat-marketplace-68m2q\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.751065 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-utilities\") pod \"redhat-marketplace-68m2q\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:43 crc 
kubenswrapper[4697]: I0127 15:33:43.751138 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-catalog-content\") pod \"redhat-marketplace-68m2q\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:43 crc kubenswrapper[4697]: I0127 15:33:43.775084 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkt8f\" (UniqueName: \"kubernetes.io/projected/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-kube-api-access-zkt8f\") pod \"redhat-marketplace-68m2q\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:44 crc kubenswrapper[4697]: I0127 15:33:44.072296 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:44 crc kubenswrapper[4697]: I0127 15:33:44.651952 4697 scope.go:117] "RemoveContainer" containerID="d45ff3f7ef76614be34b74510b586a34de4d4a412b0b2ebe6939839e20680471" Jan 27 15:33:44 crc kubenswrapper[4697]: I0127 15:33:44.677421 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-68m2q"] Jan 27 15:33:45 crc kubenswrapper[4697]: I0127 15:33:45.436471 4697 generic.go:334] "Generic (PLEG): container finished" podID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerID="b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b" exitCode=0 Jan 27 15:33:45 crc kubenswrapper[4697]: I0127 15:33:45.436727 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68m2q" event={"ID":"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed","Type":"ContainerDied","Data":"b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b"} Jan 27 15:33:45 crc kubenswrapper[4697]: I0127 15:33:45.436888 4697 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-68m2q" event={"ID":"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed","Type":"ContainerStarted","Data":"f56f14c65c0b10f7cac1e603f7bfd6ec1643668ed53cf847ff4803a4ec145601"} Jan 27 15:33:46 crc kubenswrapper[4697]: I0127 15:33:46.898395 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 15:33:46 crc kubenswrapper[4697]: I0127 15:33:46.902054 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 15:33:46 crc kubenswrapper[4697]: I0127 15:33:46.909884 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 15:33:47 crc kubenswrapper[4697]: I0127 15:33:47.459344 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68m2q" event={"ID":"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed","Type":"ContainerStarted","Data":"5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87"} Jan 27 15:33:47 crc kubenswrapper[4697]: I0127 15:33:47.466677 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 15:33:49 crc kubenswrapper[4697]: I0127 15:33:49.500741 4697 generic.go:334] "Generic (PLEG): container finished" podID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerID="5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87" exitCode=0 Jan 27 15:33:49 crc kubenswrapper[4697]: I0127 15:33:49.500815 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68m2q" event={"ID":"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed","Type":"ContainerDied","Data":"5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87"} Jan 27 15:33:49 crc kubenswrapper[4697]: I0127 15:33:49.703842 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 15:33:49 crc kubenswrapper[4697]: I0127 
15:33:49.704462 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 15:33:49 crc kubenswrapper[4697]: I0127 15:33:49.705616 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 15:33:49 crc kubenswrapper[4697]: I0127 15:33:49.729801 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 15:33:49 crc kubenswrapper[4697]: I0127 15:33:49.986891 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:33:50 crc kubenswrapper[4697]: I0127 15:33:50.035982 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:33:50 crc kubenswrapper[4697]: I0127 15:33:50.511349 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 15:33:50 crc kubenswrapper[4697]: I0127 15:33:50.519252 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 15:33:50 crc kubenswrapper[4697]: I0127 15:33:50.828098 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7j258"] Jan 27 15:33:51 crc kubenswrapper[4697]: I0127 15:33:51.552167 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68m2q" event={"ID":"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed","Type":"ContainerStarted","Data":"10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd"} Jan 27 15:33:51 crc kubenswrapper[4697]: I0127 15:33:51.552532 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7j258" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" containerID="cri-o://49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3" gracePeriod=2 
Jan 27 15:33:51 crc kubenswrapper[4697]: I0127 15:33:51.594370 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-68m2q" podStartSLOduration=3.636855471 podStartE2EDuration="8.594334564s" podCreationTimestamp="2026-01-27 15:33:43 +0000 UTC" firstStartedPulling="2026-01-27 15:33:45.438558644 +0000 UTC m=+1521.610958425" lastFinishedPulling="2026-01-27 15:33:50.396037737 +0000 UTC m=+1526.568437518" observedRunningTime="2026-01-27 15:33:51.589120077 +0000 UTC m=+1527.761519858" watchObservedRunningTime="2026-01-27 15:33:51.594334564 +0000 UTC m=+1527.766734345" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.139163 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.225221 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-utilities\") pod \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.225342 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvrck\" (UniqueName: \"kubernetes.io/projected/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-kube-api-access-dvrck\") pod \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.225422 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-catalog-content\") pod \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\" (UID: \"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5\") " Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.225732 4697 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-utilities" (OuterVolumeSpecName: "utilities") pod "b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" (UID: "b4da5f97-06a4-452e-9f5e-87c97c5bf1f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.226262 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.236289 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-kube-api-access-dvrck" (OuterVolumeSpecName: "kube-api-access-dvrck") pod "b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" (UID: "b4da5f97-06a4-452e-9f5e-87c97c5bf1f5"). InnerVolumeSpecName "kube-api-access-dvrck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.327907 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvrck\" (UniqueName: \"kubernetes.io/projected/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-kube-api-access-dvrck\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.353852 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" (UID: "b4da5f97-06a4-452e-9f5e-87c97c5bf1f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.429935 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.580507 4697 generic.go:334] "Generic (PLEG): container finished" podID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerID="49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3" exitCode=0 Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.580624 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j258" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.583681 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j258" event={"ID":"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5","Type":"ContainerDied","Data":"49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3"} Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.583721 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j258" event={"ID":"b4da5f97-06a4-452e-9f5e-87c97c5bf1f5","Type":"ContainerDied","Data":"2e9bb9c5460149a2c391b02c74b231c146bacd04bb2f91f1f234ffc1b54b746a"} Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.583739 4697 scope.go:117] "RemoveContainer" containerID="49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.626329 4697 scope.go:117] "RemoveContainer" containerID="d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.642190 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7j258"] Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 
15:33:52.660648 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7j258"] Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.767954 4697 scope.go:117] "RemoveContainer" containerID="93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.822775 4697 scope.go:117] "RemoveContainer" containerID="49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3" Jan 27 15:33:52 crc kubenswrapper[4697]: E0127 15:33:52.823235 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3\": container with ID starting with 49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3 not found: ID does not exist" containerID="49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.823280 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3"} err="failed to get container status \"49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3\": rpc error: code = NotFound desc = could not find container \"49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3\": container with ID starting with 49a8f4cc8f5bbc6e5d3bd8d63ad50c5e8e600539e0cb37b1d39713bd944703c3 not found: ID does not exist" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.823307 4697 scope.go:117] "RemoveContainer" containerID="d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3" Jan 27 15:33:52 crc kubenswrapper[4697]: E0127 15:33:52.823571 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3\": container with ID 
starting with d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3 not found: ID does not exist" containerID="d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.823612 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3"} err="failed to get container status \"d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3\": rpc error: code = NotFound desc = could not find container \"d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3\": container with ID starting with d879a71a97b56e74aa6db971b1c472a6c257bfb43d15a2933584b4488fc577a3 not found: ID does not exist" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.823634 4697 scope.go:117] "RemoveContainer" containerID="93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a" Jan 27 15:33:52 crc kubenswrapper[4697]: E0127 15:33:52.824015 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a\": container with ID starting with 93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a not found: ID does not exist" containerID="93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a" Jan 27 15:33:52 crc kubenswrapper[4697]: I0127 15:33:52.824045 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a"} err="failed to get container status \"93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a\": rpc error: code = NotFound desc = could not find container \"93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a\": container with ID starting with 93faab635d469c12b2ccee2910326b2a5f9fb1297387e5c99b7c450f07f44e2a not found: 
ID does not exist" Jan 27 15:33:54 crc kubenswrapper[4697]: I0127 15:33:54.072768 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:54 crc kubenswrapper[4697]: I0127 15:33:54.073164 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:54 crc kubenswrapper[4697]: I0127 15:33:54.124580 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:33:54 crc kubenswrapper[4697]: I0127 15:33:54.581335 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" path="/var/lib/kubelet/pods/b4da5f97-06a4-452e-9f5e-87c97c5bf1f5/volumes" Jan 27 15:33:55 crc kubenswrapper[4697]: I0127 15:33:55.109297 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:33:55 crc kubenswrapper[4697]: I0127 15:33:55.109658 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:33:55 crc kubenswrapper[4697]: I0127 15:33:55.109699 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:33:55 crc kubenswrapper[4697]: I0127 15:33:55.110432 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:33:55 crc kubenswrapper[4697]: I0127 15:33:55.110501 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" gracePeriod=600 Jan 27 15:33:55 crc kubenswrapper[4697]: E0127 15:33:55.232469 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:33:55 crc kubenswrapper[4697]: I0127 15:33:55.626809 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" exitCode=0 Jan 27 15:33:55 crc kubenswrapper[4697]: I0127 15:33:55.626860 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc"} Jan 27 15:33:55 crc kubenswrapper[4697]: I0127 15:33:55.626896 4697 scope.go:117] "RemoveContainer" containerID="5f08c1e0b4fdd3c835b2715925dd8d1fa9438edf0fb56dd634b6fc87424d2b5d" Jan 27 15:33:55 crc kubenswrapper[4697]: I0127 15:33:55.627481 4697 
scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:33:55 crc kubenswrapper[4697]: E0127 15:33:55.627769 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:33:59 crc kubenswrapper[4697]: I0127 15:33:59.295322 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 15:34:00 crc kubenswrapper[4697]: I0127 15:34:00.236550 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 15:34:04 crc kubenswrapper[4697]: I0127 15:34:04.202180 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:34:04 crc kubenswrapper[4697]: I0127 15:34:04.267316 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="abff1f2e-e0f3-4730-888c-2e2d8464f624" containerName="rabbitmq" containerID="cri-o://489c8c931a994429732b5b400a22535c3856ed191c25e0569f38c1a130722991" gracePeriod=604796 Jan 27 15:34:04 crc kubenswrapper[4697]: I0127 15:34:04.329276 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-68m2q"] Jan 27 15:34:04 crc kubenswrapper[4697]: I0127 15:34:04.703660 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-68m2q" podUID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerName="registry-server" containerID="cri-o://10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd" gracePeriod=2 
Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.251826 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.281209 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-catalog-content\") pod \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.281437 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkt8f\" (UniqueName: \"kubernetes.io/projected/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-kube-api-access-zkt8f\") pod \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.281480 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-utilities\") pod \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\" (UID: \"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed\") " Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.283511 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-utilities" (OuterVolumeSpecName: "utilities") pod "6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" (UID: "6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.292161 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-kube-api-access-zkt8f" (OuterVolumeSpecName: "kube-api-access-zkt8f") pod "6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" (UID: "6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed"). InnerVolumeSpecName "kube-api-access-zkt8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.322601 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" (UID: "6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.330523 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" containerName="rabbitmq" containerID="cri-o://b99a323fb1faec530a3f0d6f4c8ee524ea60d2eceda116d7699ad05c31946607" gracePeriod=604795 Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.383401 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkt8f\" (UniqueName: \"kubernetes.io/projected/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-kube-api-access-zkt8f\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.383610 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.383736 4697 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.714651 4697 generic.go:334] "Generic (PLEG): container finished" podID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerID="10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd" exitCode=0 Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.714727 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68m2q" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.714745 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68m2q" event={"ID":"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed","Type":"ContainerDied","Data":"10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd"} Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.720848 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68m2q" event={"ID":"6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed","Type":"ContainerDied","Data":"f56f14c65c0b10f7cac1e603f7bfd6ec1643668ed53cf847ff4803a4ec145601"} Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.720878 4697 scope.go:117] "RemoveContainer" containerID="10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.752834 4697 scope.go:117] "RemoveContainer" containerID="5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.755595 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-68m2q"] Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.764234 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-68m2q"] Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 
15:34:05.785444 4697 scope.go:117] "RemoveContainer" containerID="b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.818604 4697 scope.go:117] "RemoveContainer" containerID="10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd" Jan 27 15:34:05 crc kubenswrapper[4697]: E0127 15:34:05.819629 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd\": container with ID starting with 10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd not found: ID does not exist" containerID="10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.819703 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd"} err="failed to get container status \"10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd\": rpc error: code = NotFound desc = could not find container \"10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd\": container with ID starting with 10e7989d10606cbf3de2693cf0d7c8aaa718cec83714e7576e80a09c5e746dbd not found: ID does not exist" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.819727 4697 scope.go:117] "RemoveContainer" containerID="5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87" Jan 27 15:34:05 crc kubenswrapper[4697]: E0127 15:34:05.820006 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87\": container with ID starting with 5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87 not found: ID does not exist" 
containerID="5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.820053 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87"} err="failed to get container status \"5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87\": rpc error: code = NotFound desc = could not find container \"5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87\": container with ID starting with 5711fadc928dbeeb7ad1b1153e0ec09b62f8fa6bdb64ce4b82d597ca8acb1c87 not found: ID does not exist" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.820072 4697 scope.go:117] "RemoveContainer" containerID="b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b" Jan 27 15:34:05 crc kubenswrapper[4697]: E0127 15:34:05.820309 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b\": container with ID starting with b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b not found: ID does not exist" containerID="b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b" Jan 27 15:34:05 crc kubenswrapper[4697]: I0127 15:34:05.820358 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b"} err="failed to get container status \"b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b\": rpc error: code = NotFound desc = could not find container \"b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b\": container with ID starting with b2209aa300c7f0ee756c912114748b8a966a960030b29a9dd69e95cee816937b not found: ID does not exist" Jan 27 15:34:06 crc kubenswrapper[4697]: I0127 15:34:06.580545 4697 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" path="/var/lib/kubelet/pods/6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed/volumes" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.573825 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:34:10 crc kubenswrapper[4697]: E0127 15:34:10.575120 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.772606 4697 generic.go:334] "Generic (PLEG): container finished" podID="abff1f2e-e0f3-4730-888c-2e2d8464f624" containerID="489c8c931a994429732b5b400a22535c3856ed191c25e0569f38c1a130722991" exitCode=0 Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.772972 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abff1f2e-e0f3-4730-888c-2e2d8464f624","Type":"ContainerDied","Data":"489c8c931a994429732b5b400a22535c3856ed191c25e0569f38c1a130722991"} Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.867078 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.904908 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-confd\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.904995 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.905923 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-config-data\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.905955 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-plugins-conf\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.906241 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8npts\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-kube-api-access-8npts\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.906269 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-server-conf\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.906299 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-plugins\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.906408 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abff1f2e-e0f3-4730-888c-2e2d8464f624-erlang-cookie-secret\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.906441 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-tls\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.906489 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-erlang-cookie\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.906547 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abff1f2e-e0f3-4730-888c-2e2d8464f624-pod-info\") pod \"abff1f2e-e0f3-4730-888c-2e2d8464f624\" (UID: \"abff1f2e-e0f3-4730-888c-2e2d8464f624\") " Jan 27 15:34:10 crc kubenswrapper[4697]: 
I0127 15:34:10.909088 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.910692 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.918531 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.922793 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abff1f2e-e0f3-4730-888c-2e2d8464f624-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.924595 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.927733 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-kube-api-access-8npts" (OuterVolumeSpecName: "kube-api-access-8npts") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "kube-api-access-8npts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.931030 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/abff1f2e-e0f3-4730-888c-2e2d8464f624-pod-info" (OuterVolumeSpecName: "pod-info") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.941127 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:34:10 crc kubenswrapper[4697]: I0127 15:34:10.996386 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-config-data" (OuterVolumeSpecName: "config-data") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.013053 4697 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.013088 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.013097 4697 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.013107 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8npts\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-kube-api-access-8npts\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.013116 4697 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.013126 4697 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/abff1f2e-e0f3-4730-888c-2e2d8464f624-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.013143 4697 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.013154 4697 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.013164 4697 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abff1f2e-e0f3-4730-888c-2e2d8464f624-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.072500 4697 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.076473 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-server-conf" (OuterVolumeSpecName: "server-conf") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.116163 4697 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.116195 4697 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abff1f2e-e0f3-4730-888c-2e2d8464f624-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.163418 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "abff1f2e-e0f3-4730-888c-2e2d8464f624" (UID: "abff1f2e-e0f3-4730-888c-2e2d8464f624"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.218046 4697 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abff1f2e-e0f3-4730-888c-2e2d8464f624-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.787406 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abff1f2e-e0f3-4730-888c-2e2d8464f624","Type":"ContainerDied","Data":"e75716073c3165834ccf2316d02758f3237b18de338a5c2bafcbcc880dff5652"} Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.787480 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.787837 4697 scope.go:117] "RemoveContainer" containerID="489c8c931a994429732b5b400a22535c3856ed191c25e0569f38c1a130722991" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.805324 4697 generic.go:334] "Generic (PLEG): container finished" podID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" containerID="b99a323fb1faec530a3f0d6f4c8ee524ea60d2eceda116d7699ad05c31946607" exitCode=0 Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.805386 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eda501db-ef38-4c1f-b2d6-3e009fe24e40","Type":"ContainerDied","Data":"b99a323fb1faec530a3f0d6f4c8ee524ea60d2eceda116d7699ad05c31946607"} Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.847902 4697 scope.go:117] "RemoveContainer" containerID="e08609f1f8f84cd80d5406d5f5667af27d378fc1d71f72faca38aca992d8af76" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.886240 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.921440 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.928232 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 15:34:11 crc kubenswrapper[4697]: E0127 15:34:11.930224 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerName="extract-utilities" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.930253 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerName="extract-utilities" Jan 27 15:34:11 crc kubenswrapper[4697]: E0127 15:34:11.930297 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerName="registry-server" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.930306 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerName="registry-server" Jan 27 15:34:11 crc kubenswrapper[4697]: E0127 15:34:11.930326 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="extract-content" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.930334 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="extract-content" Jan 27 15:34:11 crc kubenswrapper[4697]: E0127 15:34:11.930369 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abff1f2e-e0f3-4730-888c-2e2d8464f624" containerName="setup-container" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.930378 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="abff1f2e-e0f3-4730-888c-2e2d8464f624" containerName="setup-container" Jan 27 15:34:11 crc kubenswrapper[4697]: E0127 15:34:11.932492 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.932534 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" Jan 27 15:34:11 crc kubenswrapper[4697]: E0127 15:34:11.932582 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="extract-utilities" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.932592 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="extract-utilities" Jan 27 15:34:11 crc kubenswrapper[4697]: E0127 15:34:11.932623 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerName="extract-content" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.932632 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerName="extract-content" Jan 27 15:34:11 crc kubenswrapper[4697]: E0127 15:34:11.953741 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abff1f2e-e0f3-4730-888c-2e2d8464f624" containerName="rabbitmq" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.953777 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="abff1f2e-e0f3-4730-888c-2e2d8464f624" containerName="rabbitmq" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.954622 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="abff1f2e-e0f3-4730-888c-2e2d8464f624" containerName="rabbitmq" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.954651 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d7eb4cc-08fe-4ad8-9400-dcecbcbf45ed" containerName="registry-server" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.954667 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4da5f97-06a4-452e-9f5e-87c97c5bf1f5" containerName="registry-server" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.960125 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.989285 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lclcv" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.992723 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.993068 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.993492 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.993732 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.994022 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 15:34:11 crc kubenswrapper[4697]: I0127 15:34:11.994226 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.023779 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.072249 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.073271 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b9b87d14-1e98-448a-9b9c-3c47e4782ede-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.073614 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.074007 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9b87d14-1e98-448a-9b9c-3c47e4782ede-config-data\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.074926 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9b87d14-1e98-448a-9b9c-3c47e4782ede-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.074965 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.075005 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") 
pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.075034 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9b87d14-1e98-448a-9b9c-3c47e4782ede-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.075078 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.075103 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nbgm\" (UniqueName: \"kubernetes.io/projected/b9b87d14-1e98-448a-9b9c-3c47e4782ede-kube-api-access-8nbgm\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.075181 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9b87d14-1e98-448a-9b9c-3c47e4782ede-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.144471 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.179413 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.179561 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9b87d14-1e98-448a-9b9c-3c47e4782ede-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.179621 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.179672 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9b87d14-1e98-448a-9b9c-3c47e4782ede-config-data\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.179741 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9b87d14-1e98-448a-9b9c-3c47e4782ede-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.179771 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.183839 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.183895 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9b87d14-1e98-448a-9b9c-3c47e4782ede-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.183956 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.183985 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nbgm\" (UniqueName: \"kubernetes.io/projected/b9b87d14-1e98-448a-9b9c-3c47e4782ede-kube-api-access-8nbgm\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.184065 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9b87d14-1e98-448a-9b9c-3c47e4782ede-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.184328 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9b87d14-1e98-448a-9b9c-3c47e4782ede-config-data\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.184702 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.188474 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.189550 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9b87d14-1e98-448a-9b9c-3c47e4782ede-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.182572 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.199118 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9b87d14-1e98-448a-9b9c-3c47e4782ede-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.207397 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.222299 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9b87d14-1e98-448a-9b9c-3c47e4782ede-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.228882 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nbgm\" (UniqueName: \"kubernetes.io/projected/b9b87d14-1e98-448a-9b9c-3c47e4782ede-kube-api-access-8nbgm\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.233391 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9b87d14-1e98-448a-9b9c-3c47e4782ede-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.236018 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9b87d14-1e98-448a-9b9c-3c47e4782ede-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.285885 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-confd\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.285950 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-plugins-conf\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.286002 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda501db-ef38-4c1f-b2d6-3e009fe24e40-pod-info\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.286040 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhpwh\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-kube-api-access-lhpwh\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.286080 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda501db-ef38-4c1f-b2d6-3e009fe24e40-erlang-cookie-secret\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.286104 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-tls\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.286139 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-plugins\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.286161 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-server-conf\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.286195 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-erlang-cookie\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.286240 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.286265 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-config-data\") pod \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\" (UID: \"eda501db-ef38-4c1f-b2d6-3e009fe24e40\") 
" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.295369 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.313877 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda501db-ef38-4c1f-b2d6-3e009fe24e40-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.314564 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b9b87d14-1e98-448a-9b9c-3c47e4782ede\") " pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.323246 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.323769 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/eda501db-ef38-4c1f-b2d6-3e009fe24e40-pod-info" (OuterVolumeSpecName: "pod-info") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.324891 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-kube-api-access-lhpwh" (OuterVolumeSpecName: "kube-api-access-lhpwh") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "kube-api-access-lhpwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.323671 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.338084 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.338719 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.352438 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-config-data" (OuterVolumeSpecName: "config-data") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.389562 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.389629 4697 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.389642 4697 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.389654 4697 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda501db-ef38-4c1f-b2d6-3e009fe24e40-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 
15:34:12.389669 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhpwh\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-kube-api-access-lhpwh\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.389683 4697 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda501db-ef38-4c1f-b2d6-3e009fe24e40-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.389706 4697 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.389716 4697 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.389727 4697 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.423520 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-server-conf" (OuterVolumeSpecName: "server-conf") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.438059 4697 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.473056 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "eda501db-ef38-4c1f-b2d6-3e009fe24e40" (UID: "eda501db-ef38-4c1f-b2d6-3e009fe24e40"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.492023 4697 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.492061 4697 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda501db-ef38-4c1f-b2d6-3e009fe24e40-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.492072 4697 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda501db-ef38-4c1f-b2d6-3e009fe24e40-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.579432 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abff1f2e-e0f3-4730-888c-2e2d8464f624" path="/var/lib/kubelet/pods/abff1f2e-e0f3-4730-888c-2e2d8464f624/volumes" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.603300 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.838764 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eda501db-ef38-4c1f-b2d6-3e009fe24e40","Type":"ContainerDied","Data":"9a6535abaa4529826966095518341edcfcb18c0fa2342438e3d28ce02fc465b4"} Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.839171 4697 scope.go:117] "RemoveContainer" containerID="b99a323fb1faec530a3f0d6f4c8ee524ea60d2eceda116d7699ad05c31946607" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.839328 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.922304 4697 scope.go:117] "RemoveContainer" containerID="840022435db03186405d1974a393a665897eadcc1c7df67f122cbcc886b3f4cc" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.928517 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.949029 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.993371 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 15:34:12 crc kubenswrapper[4697]: E0127 15:34:12.994097 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" containerName="rabbitmq" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.994120 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" containerName="rabbitmq" Jan 27 15:34:12 crc kubenswrapper[4697]: E0127 15:34:12.994432 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" containerName="setup-container" Jan 27 15:34:12 crc 
kubenswrapper[4697]: I0127 15:34:12.994447 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" containerName="setup-container" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.995491 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" containerName="rabbitmq" Jan 27 15:34:12 crc kubenswrapper[4697]: I0127 15:34:12.999383 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.007508 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.007531 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.007721 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.007874 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.007976 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k6rbz" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.008078 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.008209 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.019410 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.077644 4697 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.133632 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.133715 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtvj\" (UniqueName: \"kubernetes.io/projected/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-kube-api-access-dwtvj\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.133744 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.133766 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.133808 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.133969 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.134052 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.134119 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.134285 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.134423 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.134453 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.235774 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.235895 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwtvj\" (UniqueName: \"kubernetes.io/projected/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-kube-api-access-dwtvj\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.235932 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.235954 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 
15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.235992 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.236019 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.236048 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.236082 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.236132 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.236257 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.236285 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.236816 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.236917 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.237816 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.237993 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.238126 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.242320 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.243543 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.243608 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.244400 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.246990 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.253942 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwtvj\" (UniqueName: \"kubernetes.io/projected/e1aa709a-61ff-458d-a4b9-ca6d06bc537c-kube-api-access-dwtvj\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.267713 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1aa709a-61ff-458d-a4b9-ca6d06bc537c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.330457 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.807814 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 15:34:13 crc kubenswrapper[4697]: W0127 15:34:13.811729 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1aa709a_61ff_458d_a4b9_ca6d06bc537c.slice/crio-289053c3ba5be42f03a909a0141e9c121a3c94202055a6859903c7f72ce588c2 WatchSource:0}: Error finding container 289053c3ba5be42f03a909a0141e9c121a3c94202055a6859903c7f72ce588c2: Status 404 returned error can't find the container with id 289053c3ba5be42f03a909a0141e9c121a3c94202055a6859903c7f72ce588c2 Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.851251 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b9b87d14-1e98-448a-9b9c-3c47e4782ede","Type":"ContainerStarted","Data":"2831d34dc1a18a44e3635f75955adcd568aacca1b6aa25a770db48147cbeecc5"} Jan 27 15:34:13 crc kubenswrapper[4697]: I0127 15:34:13.857074 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1aa709a-61ff-458d-a4b9-ca6d06bc537c","Type":"ContainerStarted","Data":"289053c3ba5be42f03a909a0141e9c121a3c94202055a6859903c7f72ce588c2"} Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.429404 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-zxpjs"] Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.431726 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.450307 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.450635 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-zxpjs"] Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.578045 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.578137 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdd2\" (UniqueName: \"kubernetes.io/projected/fac6e364-46b2-43a9-9224-a46e97774be2-kube-api-access-djdd2\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.578173 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-config\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.578283 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.578329 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.578420 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.578457 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.580691 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda501db-ef38-4c1f-b2d6-3e009fe24e40" path="/var/lib/kubelet/pods/eda501db-ef38-4c1f-b2d6-3e009fe24e40/volumes" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.680341 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.680432 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.680475 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.680505 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.680587 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.680650 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdd2\" (UniqueName: \"kubernetes.io/projected/fac6e364-46b2-43a9-9224-a46e97774be2-kube-api-access-djdd2\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.680678 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-config\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.681732 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.681923 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-config\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.682068 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.682240 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.682419 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.683022 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.716949 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdd2\" (UniqueName: \"kubernetes.io/projected/fac6e364-46b2-43a9-9224-a46e97774be2-kube-api-access-djdd2\") pod \"dnsmasq-dns-79bd4cc8c9-zxpjs\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.763412 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:14 crc kubenswrapper[4697]: I0127 15:34:14.887850 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b9b87d14-1e98-448a-9b9c-3c47e4782ede","Type":"ContainerStarted","Data":"33e86b7ecb244682362e80a4774c35327658ec37be9310292b46eb74800be01a"} Jan 27 15:34:15 crc kubenswrapper[4697]: I0127 15:34:15.243770 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-zxpjs"] Jan 27 15:34:15 crc kubenswrapper[4697]: W0127 15:34:15.259680 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfac6e364_46b2_43a9_9224_a46e97774be2.slice/crio-8aee82b9c2bb6322b9c8e01a8f7f44e5a43f552304305f1e85f4466b5263386e WatchSource:0}: Error finding container 8aee82b9c2bb6322b9c8e01a8f7f44e5a43f552304305f1e85f4466b5263386e: Status 404 returned error can't find the container with id 8aee82b9c2bb6322b9c8e01a8f7f44e5a43f552304305f1e85f4466b5263386e Jan 27 15:34:15 crc kubenswrapper[4697]: I0127 15:34:15.911255 4697 generic.go:334] "Generic (PLEG): container finished" podID="fac6e364-46b2-43a9-9224-a46e97774be2" containerID="324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21" exitCode=0 Jan 27 15:34:15 crc kubenswrapper[4697]: I0127 15:34:15.911635 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" event={"ID":"fac6e364-46b2-43a9-9224-a46e97774be2","Type":"ContainerDied","Data":"324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21"} Jan 27 15:34:15 crc kubenswrapper[4697]: I0127 15:34:15.911670 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" event={"ID":"fac6e364-46b2-43a9-9224-a46e97774be2","Type":"ContainerStarted","Data":"8aee82b9c2bb6322b9c8e01a8f7f44e5a43f552304305f1e85f4466b5263386e"} Jan 27 15:34:15 crc 
kubenswrapper[4697]: I0127 15:34:15.919555 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1aa709a-61ff-458d-a4b9-ca6d06bc537c","Type":"ContainerStarted","Data":"52f02509f50930219a00ce8f13766ed222279c37deb2dd1fee50d38fc4710594"} Jan 27 15:34:16 crc kubenswrapper[4697]: I0127 15:34:16.931740 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" event={"ID":"fac6e364-46b2-43a9-9224-a46e97774be2","Type":"ContainerStarted","Data":"1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482"} Jan 27 15:34:16 crc kubenswrapper[4697]: I0127 15:34:16.932161 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:16 crc kubenswrapper[4697]: I0127 15:34:16.965289 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" podStartSLOduration=2.9652646799999998 podStartE2EDuration="2.96526468s" podCreationTimestamp="2026-01-27 15:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:34:16.951904295 +0000 UTC m=+1553.124304076" watchObservedRunningTime="2026-01-27 15:34:16.96526468 +0000 UTC m=+1553.137664471" Jan 27 15:34:21 crc kubenswrapper[4697]: I0127 15:34:21.568983 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:34:21 crc kubenswrapper[4697]: E0127 15:34:21.569627 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" 
podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:34:24 crc kubenswrapper[4697]: I0127 15:34:24.765618 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:24 crc kubenswrapper[4697]: I0127 15:34:24.834476 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5nhp7"] Jan 27 15:34:24 crc kubenswrapper[4697]: I0127 15:34:24.834805 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" podUID="ad285a43-a79d-4383-acc4-208659eeffe1" containerName="dnsmasq-dns" containerID="cri-o://14a501270d6427fe8a09df1aca2ed85a57b25173f8511970b1f7b82a8efd75d3" gracePeriod=10 Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.012082 4697 generic.go:334] "Generic (PLEG): container finished" podID="ad285a43-a79d-4383-acc4-208659eeffe1" containerID="14a501270d6427fe8a09df1aca2ed85a57b25173f8511970b1f7b82a8efd75d3" exitCode=0 Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.012123 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" event={"ID":"ad285a43-a79d-4383-acc4-208659eeffe1","Type":"ContainerDied","Data":"14a501270d6427fe8a09df1aca2ed85a57b25173f8511970b1f7b82a8efd75d3"} Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.058883 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-jdz29"] Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.061861 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.093867 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.093963 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.094029 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c84p\" (UniqueName: \"kubernetes.io/projected/56c582a3-145c-4300-8680-1720a7581f60-kube-api-access-4c84p\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.094077 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-config\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.094107 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.094206 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.094307 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.117856 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-jdz29"] Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.195760 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.195901 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.197913 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.197987 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.198055 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.198133 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c84p\" (UniqueName: \"kubernetes.io/projected/56c582a3-145c-4300-8680-1720a7581f60-kube-api-access-4c84p\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.198203 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-config\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.198547 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.199105 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-config\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.199127 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.197100 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.199682 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.199774 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/56c582a3-145c-4300-8680-1720a7581f60-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.241392 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c84p\" (UniqueName: \"kubernetes.io/projected/56c582a3-145c-4300-8680-1720a7581f60-kube-api-access-4c84p\") pod \"dnsmasq-dns-6ff66b85ff-jdz29\" (UID: \"56c582a3-145c-4300-8680-1720a7581f60\") " pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.383926 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.512929 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.706197 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-config\") pod \"ad285a43-a79d-4383-acc4-208659eeffe1\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.706549 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-sb\") pod \"ad285a43-a79d-4383-acc4-208659eeffe1\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.706655 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-nb\") pod \"ad285a43-a79d-4383-acc4-208659eeffe1\" (UID: 
\"ad285a43-a79d-4383-acc4-208659eeffe1\") " Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.706728 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-svc\") pod \"ad285a43-a79d-4383-acc4-208659eeffe1\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.706768 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-swift-storage-0\") pod \"ad285a43-a79d-4383-acc4-208659eeffe1\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.706903 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64xht\" (UniqueName: \"kubernetes.io/projected/ad285a43-a79d-4383-acc4-208659eeffe1-kube-api-access-64xht\") pod \"ad285a43-a79d-4383-acc4-208659eeffe1\" (UID: \"ad285a43-a79d-4383-acc4-208659eeffe1\") " Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.712901 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad285a43-a79d-4383-acc4-208659eeffe1-kube-api-access-64xht" (OuterVolumeSpecName: "kube-api-access-64xht") pod "ad285a43-a79d-4383-acc4-208659eeffe1" (UID: "ad285a43-a79d-4383-acc4-208659eeffe1"). InnerVolumeSpecName "kube-api-access-64xht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.757200 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-config" (OuterVolumeSpecName: "config") pod "ad285a43-a79d-4383-acc4-208659eeffe1" (UID: "ad285a43-a79d-4383-acc4-208659eeffe1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.765748 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad285a43-a79d-4383-acc4-208659eeffe1" (UID: "ad285a43-a79d-4383-acc4-208659eeffe1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.767461 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad285a43-a79d-4383-acc4-208659eeffe1" (UID: "ad285a43-a79d-4383-acc4-208659eeffe1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.768234 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad285a43-a79d-4383-acc4-208659eeffe1" (UID: "ad285a43-a79d-4383-acc4-208659eeffe1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.784246 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ad285a43-a79d-4383-acc4-208659eeffe1" (UID: "ad285a43-a79d-4383-acc4-208659eeffe1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.809235 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.809268 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.809280 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64xht\" (UniqueName: \"kubernetes.io/projected/ad285a43-a79d-4383-acc4-208659eeffe1-kube-api-access-64xht\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.809288 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.809296 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.809305 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad285a43-a79d-4383-acc4-208659eeffe1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:25 crc kubenswrapper[4697]: I0127 15:34:25.904002 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-jdz29"] Jan 27 15:34:26 crc kubenswrapper[4697]: I0127 15:34:26.023164 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" 
event={"ID":"56c582a3-145c-4300-8680-1720a7581f60","Type":"ContainerStarted","Data":"bc0c0293d37733dad36c8d9572947a770a779f5c09ba06af0f05aef489a148f0"} Jan 27 15:34:26 crc kubenswrapper[4697]: I0127 15:34:26.026368 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" event={"ID":"ad285a43-a79d-4383-acc4-208659eeffe1","Type":"ContainerDied","Data":"77c9d1b6d77316d3742d6f4ea316ba2e11b382dcf646f208db8aa4eb3407a2ae"} Jan 27 15:34:26 crc kubenswrapper[4697]: I0127 15:34:26.026408 4697 scope.go:117] "RemoveContainer" containerID="14a501270d6427fe8a09df1aca2ed85a57b25173f8511970b1f7b82a8efd75d3" Jan 27 15:34:26 crc kubenswrapper[4697]: I0127 15:34:26.026546 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5nhp7" Jan 27 15:34:26 crc kubenswrapper[4697]: I0127 15:34:26.083608 4697 scope.go:117] "RemoveContainer" containerID="e3766533ac8ecc34aafefc8a5e93e5930e56d4e278a4d15a8f7553c108db9431" Jan 27 15:34:26 crc kubenswrapper[4697]: I0127 15:34:26.121319 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5nhp7"] Jan 27 15:34:26 crc kubenswrapper[4697]: I0127 15:34:26.131879 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5nhp7"] Jan 27 15:34:26 crc kubenswrapper[4697]: I0127 15:34:26.577671 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad285a43-a79d-4383-acc4-208659eeffe1" path="/var/lib/kubelet/pods/ad285a43-a79d-4383-acc4-208659eeffe1/volumes" Jan 27 15:34:27 crc kubenswrapper[4697]: I0127 15:34:27.041348 4697 generic.go:334] "Generic (PLEG): container finished" podID="56c582a3-145c-4300-8680-1720a7581f60" containerID="334e5630539e20b0c6964de3662a99374abbc7b7d30476a2418ae5d3be58983a" exitCode=0 Jan 27 15:34:27 crc kubenswrapper[4697]: I0127 15:34:27.041407 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" event={"ID":"56c582a3-145c-4300-8680-1720a7581f60","Type":"ContainerDied","Data":"334e5630539e20b0c6964de3662a99374abbc7b7d30476a2418ae5d3be58983a"} Jan 27 15:34:28 crc kubenswrapper[4697]: I0127 15:34:28.053401 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" event={"ID":"56c582a3-145c-4300-8680-1720a7581f60","Type":"ContainerStarted","Data":"ad8916067aedb2e0ee3177c7d2789b64405d9aeb952b684b5d72a34c1c58f53f"} Jan 27 15:34:28 crc kubenswrapper[4697]: I0127 15:34:28.054191 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:28 crc kubenswrapper[4697]: I0127 15:34:28.081058 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" podStartSLOduration=3.081040645 podStartE2EDuration="3.081040645s" podCreationTimestamp="2026-01-27 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:34:28.078236427 +0000 UTC m=+1564.250636228" watchObservedRunningTime="2026-01-27 15:34:28.081040645 +0000 UTC m=+1564.253440426" Jan 27 15:34:32 crc kubenswrapper[4697]: E0127 15:34:32.854974 4697 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 27 15:34:33 crc kubenswrapper[4697]: I0127 15:34:33.568742 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:34:33 crc kubenswrapper[4697]: E0127 15:34:33.569011 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:34:35 crc kubenswrapper[4697]: I0127 15:34:35.386065 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff66b85ff-jdz29" Jan 27 15:34:35 crc kubenswrapper[4697]: I0127 15:34:35.495392 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-zxpjs"] Jan 27 15:34:35 crc kubenswrapper[4697]: I0127 15:34:35.496056 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" podUID="fac6e364-46b2-43a9-9224-a46e97774be2" containerName="dnsmasq-dns" containerID="cri-o://1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482" gracePeriod=10 Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.016401 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.103028 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-nb\") pod \"fac6e364-46b2-43a9-9224-a46e97774be2\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.103153 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-svc\") pod \"fac6e364-46b2-43a9-9224-a46e97774be2\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.103239 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djdd2\" (UniqueName: \"kubernetes.io/projected/fac6e364-46b2-43a9-9224-a46e97774be2-kube-api-access-djdd2\") pod \"fac6e364-46b2-43a9-9224-a46e97774be2\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.103265 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-openstack-edpm-ipam\") pod \"fac6e364-46b2-43a9-9224-a46e97774be2\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.103304 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-config\") pod \"fac6e364-46b2-43a9-9224-a46e97774be2\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.103387 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-swift-storage-0\") pod \"fac6e364-46b2-43a9-9224-a46e97774be2\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.103470 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-sb\") pod \"fac6e364-46b2-43a9-9224-a46e97774be2\" (UID: \"fac6e364-46b2-43a9-9224-a46e97774be2\") " Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.137468 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac6e364-46b2-43a9-9224-a46e97774be2-kube-api-access-djdd2" (OuterVolumeSpecName: "kube-api-access-djdd2") pod "fac6e364-46b2-43a9-9224-a46e97774be2" (UID: "fac6e364-46b2-43a9-9224-a46e97774be2"). InnerVolumeSpecName "kube-api-access-djdd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.138910 4697 generic.go:334] "Generic (PLEG): container finished" podID="fac6e364-46b2-43a9-9224-a46e97774be2" containerID="1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482" exitCode=0 Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.138950 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" event={"ID":"fac6e364-46b2-43a9-9224-a46e97774be2","Type":"ContainerDied","Data":"1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482"} Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.138975 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" event={"ID":"fac6e364-46b2-43a9-9224-a46e97774be2","Type":"ContainerDied","Data":"8aee82b9c2bb6322b9c8e01a8f7f44e5a43f552304305f1e85f4466b5263386e"} Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.138990 4697 scope.go:117] 
"RemoveContainer" containerID="1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.139119 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-zxpjs" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.215193 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djdd2\" (UniqueName: \"kubernetes.io/projected/fac6e364-46b2-43a9-9224-a46e97774be2-kube-api-access-djdd2\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.228233 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-config" (OuterVolumeSpecName: "config") pod "fac6e364-46b2-43a9-9224-a46e97774be2" (UID: "fac6e364-46b2-43a9-9224-a46e97774be2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.244323 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fac6e364-46b2-43a9-9224-a46e97774be2" (UID: "fac6e364-46b2-43a9-9224-a46e97774be2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.246507 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "fac6e364-46b2-43a9-9224-a46e97774be2" (UID: "fac6e364-46b2-43a9-9224-a46e97774be2"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.252351 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fac6e364-46b2-43a9-9224-a46e97774be2" (UID: "fac6e364-46b2-43a9-9224-a46e97774be2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.271983 4697 scope.go:117] "RemoveContainer" containerID="324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.276293 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fac6e364-46b2-43a9-9224-a46e97774be2" (UID: "fac6e364-46b2-43a9-9224-a46e97774be2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.317711 4697 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.317752 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.317761 4697 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.317772 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.317798 4697 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.323692 4697 scope.go:117] "RemoveContainer" containerID="1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482" Jan 27 15:34:36 crc kubenswrapper[4697]: E0127 15:34:36.327158 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482\": container with ID starting with 1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482 not found: ID does not exist" 
containerID="1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.327199 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482"} err="failed to get container status \"1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482\": rpc error: code = NotFound desc = could not find container \"1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482\": container with ID starting with 1691f364e382643bd367ffecee4e87b2f755732b8ce682aee28a2d537a0a2482 not found: ID does not exist" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.327225 4697 scope.go:117] "RemoveContainer" containerID="324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21" Jan 27 15:34:36 crc kubenswrapper[4697]: E0127 15:34:36.327950 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21\": container with ID starting with 324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21 not found: ID does not exist" containerID="324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.327978 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21"} err="failed to get container status \"324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21\": rpc error: code = NotFound desc = could not find container \"324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21\": container with ID starting with 324f33fb01224c968dca774f1df8ac04a4ed1a6f7f48ea022a8987bf3505db21 not found: ID does not exist" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.341394 4697 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fac6e364-46b2-43a9-9224-a46e97774be2" (UID: "fac6e364-46b2-43a9-9224-a46e97774be2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.419368 4697 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac6e364-46b2-43a9-9224-a46e97774be2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.493661 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-zxpjs"] Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.508185 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-zxpjs"] Jan 27 15:34:36 crc kubenswrapper[4697]: I0127 15:34:36.581495 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac6e364-46b2-43a9-9224-a46e97774be2" path="/var/lib/kubelet/pods/fac6e364-46b2-43a9-9224-a46e97774be2/volumes" Jan 27 15:34:45 crc kubenswrapper[4697]: I0127 15:34:44.999838 4697 scope.go:117] "RemoveContainer" containerID="b37152d1cfebc0bfb73c67e235ff7c5ea59c99acd5875d5a03d755ff23ac9fe2" Jan 27 15:34:45 crc kubenswrapper[4697]: I0127 15:34:45.038041 4697 scope.go:117] "RemoveContainer" containerID="149ead3ebf8a54efc657986d8158df9b8a0b93f70d2ec71a6dc4f1748fec1467" Jan 27 15:34:45 crc kubenswrapper[4697]: I0127 15:34:45.084452 4697 scope.go:117] "RemoveContainer" containerID="a3851091dd16121e6a0150dc86e85c5f20ee73c74922a8b289249e30c8a4bf63" Jan 27 15:34:45 crc kubenswrapper[4697]: I0127 15:34:45.117073 4697 scope.go:117] "RemoveContainer" containerID="bd61487fd854802c2e320e4d05469eb46e19091ab34cfe84eec437e4c137a414" Jan 27 15:34:45 crc kubenswrapper[4697]: I0127 15:34:45.569290 
4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:34:45 crc kubenswrapper[4697]: E0127 15:34:45.569989 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:34:47 crc kubenswrapper[4697]: I0127 15:34:47.261816 4697 generic.go:334] "Generic (PLEG): container finished" podID="b9b87d14-1e98-448a-9b9c-3c47e4782ede" containerID="33e86b7ecb244682362e80a4774c35327658ec37be9310292b46eb74800be01a" exitCode=0 Jan 27 15:34:47 crc kubenswrapper[4697]: I0127 15:34:47.262011 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b9b87d14-1e98-448a-9b9c-3c47e4782ede","Type":"ContainerDied","Data":"33e86b7ecb244682362e80a4774c35327658ec37be9310292b46eb74800be01a"} Jan 27 15:34:47 crc kubenswrapper[4697]: I0127 15:34:47.265571 4697 generic.go:334] "Generic (PLEG): container finished" podID="e1aa709a-61ff-458d-a4b9-ca6d06bc537c" containerID="52f02509f50930219a00ce8f13766ed222279c37deb2dd1fee50d38fc4710594" exitCode=0 Jan 27 15:34:47 crc kubenswrapper[4697]: I0127 15:34:47.265621 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1aa709a-61ff-458d-a4b9-ca6d06bc537c","Type":"ContainerDied","Data":"52f02509f50930219a00ce8f13766ed222279c37deb2dd1fee50d38fc4710594"} Jan 27 15:34:48 crc kubenswrapper[4697]: I0127 15:34:48.277221 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"e1aa709a-61ff-458d-a4b9-ca6d06bc537c","Type":"ContainerStarted","Data":"35537ccaeadb952828bae73157a93869e1716213dab658e26a12cc3fedd47570"} Jan 27 15:34:48 crc kubenswrapper[4697]: I0127 15:34:48.278063 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:34:48 crc kubenswrapper[4697]: I0127 15:34:48.279993 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b9b87d14-1e98-448a-9b9c-3c47e4782ede","Type":"ContainerStarted","Data":"125cdcb2ba09eaaf5472f825a3d92000e2a88d1769652507e248e60a4b8028fa"} Jan 27 15:34:48 crc kubenswrapper[4697]: I0127 15:34:48.280204 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 15:34:48 crc kubenswrapper[4697]: I0127 15:34:48.309808 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.309791668 podStartE2EDuration="36.309791668s" podCreationTimestamp="2026-01-27 15:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:34:48.305623076 +0000 UTC m=+1584.478022867" watchObservedRunningTime="2026-01-27 15:34:48.309791668 +0000 UTC m=+1584.482191459" Jan 27 15:34:48 crc kubenswrapper[4697]: I0127 15:34:48.350427 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.350409108 podStartE2EDuration="37.350409108s" podCreationTimestamp="2026-01-27 15:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:34:48.337038391 +0000 UTC m=+1584.509438162" watchObservedRunningTime="2026-01-27 15:34:48.350409108 +0000 UTC m=+1584.522808889" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.724838 4697 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt"] Jan 27 15:34:56 crc kubenswrapper[4697]: E0127 15:34:56.725851 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac6e364-46b2-43a9-9224-a46e97774be2" containerName="init" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.725865 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac6e364-46b2-43a9-9224-a46e97774be2" containerName="init" Jan 27 15:34:56 crc kubenswrapper[4697]: E0127 15:34:56.725881 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad285a43-a79d-4383-acc4-208659eeffe1" containerName="dnsmasq-dns" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.725887 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad285a43-a79d-4383-acc4-208659eeffe1" containerName="dnsmasq-dns" Jan 27 15:34:56 crc kubenswrapper[4697]: E0127 15:34:56.725934 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac6e364-46b2-43a9-9224-a46e97774be2" containerName="dnsmasq-dns" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.725944 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac6e364-46b2-43a9-9224-a46e97774be2" containerName="dnsmasq-dns" Jan 27 15:34:56 crc kubenswrapper[4697]: E0127 15:34:56.725958 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad285a43-a79d-4383-acc4-208659eeffe1" containerName="init" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.725965 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad285a43-a79d-4383-acc4-208659eeffe1" containerName="init" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.726188 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac6e364-46b2-43a9-9224-a46e97774be2" containerName="dnsmasq-dns" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.726210 4697 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ad285a43-a79d-4383-acc4-208659eeffe1" containerName="dnsmasq-dns" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.727621 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.730661 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.730723 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.731010 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.731172 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.797392 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt"] Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.911891 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.911955 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-ssh-key-openstack-edpm-ipam\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.912044 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:56 crc kubenswrapper[4697]: I0127 15:34:56.912196 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qv2d\" (UniqueName: \"kubernetes.io/projected/6eb281af-668c-4872-8100-3a9db4eb4c5a-kube-api-access-6qv2d\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.013671 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.013975 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.014017 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.014131 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qv2d\" (UniqueName: \"kubernetes.io/projected/6eb281af-668c-4872-8100-3a9db4eb4c5a-kube-api-access-6qv2d\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.019452 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.021424 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.025413 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.032059 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qv2d\" (UniqueName: \"kubernetes.io/projected/6eb281af-668c-4872-8100-3a9db4eb4c5a-kube-api-access-6qv2d\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.047876 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:34:57 crc kubenswrapper[4697]: I0127 15:34:57.845139 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt"] Jan 27 15:34:58 crc kubenswrapper[4697]: I0127 15:34:58.374328 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" event={"ID":"6eb281af-668c-4872-8100-3a9db4eb4c5a","Type":"ContainerStarted","Data":"f1018bb2b2e22cfadf1167e83c8740f84a3803cec9a21d41e0526f97674297bf"} Jan 27 15:35:00 crc kubenswrapper[4697]: I0127 15:35:00.568072 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:35:00 crc kubenswrapper[4697]: E0127 15:35:00.568573 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:35:02 crc kubenswrapper[4697]: I0127 15:35:02.608808 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 15:35:03 crc kubenswrapper[4697]: I0127 15:35:03.337769 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 15:35:11 crc kubenswrapper[4697]: E0127 15:35:11.460904 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 27 15:35:11 crc kubenswrapper[4697]: E0127 15:35:11.461538 4697 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 15:35:11 crc kubenswrapper[4697]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 27 15:35:11 crc kubenswrapper[4697]: - hosts: all Jan 27 15:35:11 crc kubenswrapper[4697]: strategy: linear Jan 27 15:35:11 crc kubenswrapper[4697]: tasks: Jan 27 15:35:11 crc kubenswrapper[4697]: - name: Enable podified-repos Jan 27 15:35:11 crc kubenswrapper[4697]: become: true Jan 27 15:35:11 crc kubenswrapper[4697]: ansible.builtin.shell: | Jan 27 15:35:11 crc kubenswrapper[4697]: set -euxo pipefail Jan 27 15:35:11 crc kubenswrapper[4697]: pushd /var/tmp Jan 27 15:35:11 crc kubenswrapper[4697]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar 
-xz Jan 27 15:35:11 crc kubenswrapper[4697]: pushd repo-setup-main Jan 27 15:35:11 crc kubenswrapper[4697]: python3 -m venv ./venv Jan 27 15:35:11 crc kubenswrapper[4697]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 27 15:35:11 crc kubenswrapper[4697]: ./venv/bin/repo-setup current-podified -b antelope Jan 27 15:35:11 crc kubenswrapper[4697]: popd Jan 27 15:35:11 crc kubenswrapper[4697]: rm -rf repo-setup-main Jan 27 15:35:11 crc kubenswrapper[4697]: Jan 27 15:35:11 crc kubenswrapper[4697]: Jan 27 15:35:11 crc kubenswrapper[4697]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 27 15:35:11 crc kubenswrapper[4697]: edpm_override_hosts: openstack-edpm-ipam Jan 27 15:35:11 crc kubenswrapper[4697]: edpm_service_type: repo-setup Jan 27 15:35:11 crc kubenswrapper[4697]: Jan 27 15:35:11 crc kubenswrapper[4697]: Jan 27 15:35:11 crc kubenswrapper[4697]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qv2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsN
onRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt_openstack(6eb281af-668c-4872-8100-3a9db4eb4c5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 27 15:35:11 crc kubenswrapper[4697]: > logger="UnhandledError" Jan 27 15:35:11 crc kubenswrapper[4697]: E0127 15:35:11.463272 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" podUID="6eb281af-668c-4872-8100-3a9db4eb4c5a" Jan 27 15:35:11 crc kubenswrapper[4697]: E0127 15:35:11.537436 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" podUID="6eb281af-668c-4872-8100-3a9db4eb4c5a" Jan 27 15:35:14 crc kubenswrapper[4697]: I0127 15:35:14.577130 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:35:14 crc kubenswrapper[4697]: E0127 15:35:14.577847 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:35:26 crc kubenswrapper[4697]: I0127 15:35:26.091344 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:35:26 crc kubenswrapper[4697]: I0127 15:35:26.683642 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" event={"ID":"6eb281af-668c-4872-8100-3a9db4eb4c5a","Type":"ContainerStarted","Data":"c83508425554cd9d06e1a70770536633a9e817b717008f429ff2bc00039ad768"} Jan 27 15:35:26 crc kubenswrapper[4697]: I0127 15:35:26.706752 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" podStartSLOduration=2.459414223 podStartE2EDuration="30.706730476s" podCreationTimestamp="2026-01-27 15:34:56 +0000 UTC" firstStartedPulling="2026-01-27 15:34:57.841896872 +0000 UTC m=+1594.014296653" lastFinishedPulling="2026-01-27 15:35:26.089213125 +0000 UTC m=+1622.261612906" observedRunningTime="2026-01-27 15:35:26.697663465 +0000 UTC m=+1622.870063246" watchObservedRunningTime="2026-01-27 15:35:26.706730476 +0000 UTC m=+1622.879130257" Jan 27 15:35:30 crc kubenswrapper[4697]: I0127 15:35:30.568449 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:35:30 crc kubenswrapper[4697]: E0127 15:35:30.569116 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:35:41 crc kubenswrapper[4697]: I0127 15:35:41.818309 4697 generic.go:334] "Generic (PLEG): container finished" podID="6eb281af-668c-4872-8100-3a9db4eb4c5a" containerID="c83508425554cd9d06e1a70770536633a9e817b717008f429ff2bc00039ad768" exitCode=0 Jan 27 15:35:41 crc kubenswrapper[4697]: I0127 15:35:41.818538 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" event={"ID":"6eb281af-668c-4872-8100-3a9db4eb4c5a","Type":"ContainerDied","Data":"c83508425554cd9d06e1a70770536633a9e817b717008f429ff2bc00039ad768"} Jan 27 15:35:42 crc kubenswrapper[4697]: I0127 15:35:42.569186 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:35:42 crc kubenswrapper[4697]: E0127 15:35:42.569505 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.237384 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.257264 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-inventory\") pod \"6eb281af-668c-4872-8100-3a9db4eb4c5a\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.257612 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-ssh-key-openstack-edpm-ipam\") pod \"6eb281af-668c-4872-8100-3a9db4eb4c5a\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.257702 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qv2d\" (UniqueName: \"kubernetes.io/projected/6eb281af-668c-4872-8100-3a9db4eb4c5a-kube-api-access-6qv2d\") pod \"6eb281af-668c-4872-8100-3a9db4eb4c5a\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.257754 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-repo-setup-combined-ca-bundle\") pod \"6eb281af-668c-4872-8100-3a9db4eb4c5a\" (UID: \"6eb281af-668c-4872-8100-3a9db4eb4c5a\") " Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.265139 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb281af-668c-4872-8100-3a9db4eb4c5a-kube-api-access-6qv2d" (OuterVolumeSpecName: "kube-api-access-6qv2d") pod "6eb281af-668c-4872-8100-3a9db4eb4c5a" (UID: "6eb281af-668c-4872-8100-3a9db4eb4c5a"). InnerVolumeSpecName "kube-api-access-6qv2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.279665 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6eb281af-668c-4872-8100-3a9db4eb4c5a" (UID: "6eb281af-668c-4872-8100-3a9db4eb4c5a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.297904 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-inventory" (OuterVolumeSpecName: "inventory") pod "6eb281af-668c-4872-8100-3a9db4eb4c5a" (UID: "6eb281af-668c-4872-8100-3a9db4eb4c5a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.315179 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6eb281af-668c-4872-8100-3a9db4eb4c5a" (UID: "6eb281af-668c-4872-8100-3a9db4eb4c5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.360627 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.360672 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.360685 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qv2d\" (UniqueName: \"kubernetes.io/projected/6eb281af-668c-4872-8100-3a9db4eb4c5a-kube-api-access-6qv2d\") on node \"crc\" DevicePath \"\"" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.360697 4697 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb281af-668c-4872-8100-3a9db4eb4c5a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.840400 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" event={"ID":"6eb281af-668c-4872-8100-3a9db4eb4c5a","Type":"ContainerDied","Data":"f1018bb2b2e22cfadf1167e83c8740f84a3803cec9a21d41e0526f97674297bf"} Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.840883 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1018bb2b2e22cfadf1167e83c8740f84a3803cec9a21d41e0526f97674297bf" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.840618 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.947519 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs"] Jan 27 15:35:43 crc kubenswrapper[4697]: E0127 15:35:43.948097 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb281af-668c-4872-8100-3a9db4eb4c5a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.948122 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb281af-668c-4872-8100-3a9db4eb4c5a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.948396 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb281af-668c-4872-8100-3a9db4eb4c5a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.949190 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.953360 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.953661 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.953817 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.958579 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.959656 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs"] Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.975120 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bhvqs\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.975211 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ztlx\" (UniqueName: \"kubernetes.io/projected/e7fe5183-36d1-4594-859b-b999146707ad-kube-api-access-2ztlx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bhvqs\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:43 crc kubenswrapper[4697]: I0127 15:35:43.975256 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bhvqs\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.076687 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bhvqs\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.076755 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ztlx\" (UniqueName: \"kubernetes.io/projected/e7fe5183-36d1-4594-859b-b999146707ad-kube-api-access-2ztlx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bhvqs\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.076803 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bhvqs\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.084010 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-bhvqs\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.085132 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bhvqs\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.095399 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ztlx\" (UniqueName: \"kubernetes.io/projected/e7fe5183-36d1-4594-859b-b999146707ad-kube-api-access-2ztlx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bhvqs\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.280352 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.834492 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs"] Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.845760 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:35:44 crc kubenswrapper[4697]: I0127 15:35:44.854960 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" event={"ID":"e7fe5183-36d1-4594-859b-b999146707ad","Type":"ContainerStarted","Data":"6ce26c9f94e372f622387dafcb2a978228916b9529dcd7b3c0c9b3c8c50829a6"} Jan 27 15:35:45 crc kubenswrapper[4697]: I0127 15:35:45.411303 4697 scope.go:117] "RemoveContainer" containerID="41730bf612d1b12077a746b67d7a91f4a81462e0cc7cdf86fe3d450b1e672c0a" Jan 27 15:35:45 crc kubenswrapper[4697]: I0127 15:35:45.582374 4697 scope.go:117] "RemoveContainer" containerID="2a16194145ec6654978b540e58e66ba2b45b349503991b45948adac1968da332" Jan 27 15:35:45 crc kubenswrapper[4697]: I0127 15:35:45.870098 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" event={"ID":"e7fe5183-36d1-4594-859b-b999146707ad","Type":"ContainerStarted","Data":"326d976e5ac830ac54cb78db55f659b7d38d8fa9b222371d2526b1d29ee8c8a8"} Jan 27 15:35:45 crc kubenswrapper[4697]: I0127 15:35:45.895570 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" podStartSLOduration=2.334104456 podStartE2EDuration="2.895548011s" podCreationTimestamp="2026-01-27 15:35:43 +0000 UTC" firstStartedPulling="2026-01-27 15:35:44.845541798 +0000 UTC m=+1641.017941579" lastFinishedPulling="2026-01-27 15:35:45.406985353 +0000 UTC m=+1641.579385134" observedRunningTime="2026-01-27 15:35:45.888118539 
+0000 UTC m=+1642.060518340" watchObservedRunningTime="2026-01-27 15:35:45.895548011 +0000 UTC m=+1642.067947792" Jan 27 15:35:48 crc kubenswrapper[4697]: I0127 15:35:48.901863 4697 generic.go:334] "Generic (PLEG): container finished" podID="e7fe5183-36d1-4594-859b-b999146707ad" containerID="326d976e5ac830ac54cb78db55f659b7d38d8fa9b222371d2526b1d29ee8c8a8" exitCode=0 Jan 27 15:35:48 crc kubenswrapper[4697]: I0127 15:35:48.901951 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" event={"ID":"e7fe5183-36d1-4594-859b-b999146707ad","Type":"ContainerDied","Data":"326d976e5ac830ac54cb78db55f659b7d38d8fa9b222371d2526b1d29ee8c8a8"} Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.297177 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.332797 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ztlx\" (UniqueName: \"kubernetes.io/projected/e7fe5183-36d1-4594-859b-b999146707ad-kube-api-access-2ztlx\") pod \"e7fe5183-36d1-4594-859b-b999146707ad\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.333004 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-ssh-key-openstack-edpm-ipam\") pod \"e7fe5183-36d1-4594-859b-b999146707ad\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.333071 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-inventory\") pod \"e7fe5183-36d1-4594-859b-b999146707ad\" (UID: \"e7fe5183-36d1-4594-859b-b999146707ad\") " 
Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.341325 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fe5183-36d1-4594-859b-b999146707ad-kube-api-access-2ztlx" (OuterVolumeSpecName: "kube-api-access-2ztlx") pod "e7fe5183-36d1-4594-859b-b999146707ad" (UID: "e7fe5183-36d1-4594-859b-b999146707ad"). InnerVolumeSpecName "kube-api-access-2ztlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.359023 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-inventory" (OuterVolumeSpecName: "inventory") pod "e7fe5183-36d1-4594-859b-b999146707ad" (UID: "e7fe5183-36d1-4594-859b-b999146707ad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.370082 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e7fe5183-36d1-4594-859b-b999146707ad" (UID: "e7fe5183-36d1-4594-859b-b999146707ad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.435490 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.435521 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7fe5183-36d1-4594-859b-b999146707ad-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.435531 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ztlx\" (UniqueName: \"kubernetes.io/projected/e7fe5183-36d1-4594-859b-b999146707ad-kube-api-access-2ztlx\") on node \"crc\" DevicePath \"\"" Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.925683 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" event={"ID":"e7fe5183-36d1-4594-859b-b999146707ad","Type":"ContainerDied","Data":"6ce26c9f94e372f622387dafcb2a978228916b9529dcd7b3c0c9b3c8c50829a6"} Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.925727 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ce26c9f94e372f622387dafcb2a978228916b9529dcd7b3c0c9b3c8c50829a6" Jan 27 15:35:50 crc kubenswrapper[4697]: I0127 15:35:50.925811 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bhvqs" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.007910 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2"] Jan 27 15:35:51 crc kubenswrapper[4697]: E0127 15:35:51.008328 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fe5183-36d1-4594-859b-b999146707ad" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.008347 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fe5183-36d1-4594-859b-b999146707ad" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.008549 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7fe5183-36d1-4594-859b-b999146707ad" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.009258 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.017522 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.017750 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.017920 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.017985 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.019534 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2"] Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.047962 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.048014 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.048069 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rctld\" (UniqueName: \"kubernetes.io/projected/e6db178e-d462-4895-84e2-10695b0df557-kube-api-access-rctld\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.048103 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.150040 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.150099 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.150185 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rctld\" (UniqueName: 
\"kubernetes.io/projected/e6db178e-d462-4895-84e2-10695b0df557-kube-api-access-rctld\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.150238 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.156682 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.156707 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.157330 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.168851 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rctld\" (UniqueName: \"kubernetes.io/projected/e6db178e-d462-4895-84e2-10695b0df557-kube-api-access-rctld\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.330322 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.903391 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2"] Jan 27 15:35:51 crc kubenswrapper[4697]: I0127 15:35:51.948633 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" event={"ID":"e6db178e-d462-4895-84e2-10695b0df557","Type":"ContainerStarted","Data":"71722b0223e307fb8d23b94c5d3bfb6f77b7f814b04c0dc6629e5f7bf69d2700"} Jan 27 15:35:52 crc kubenswrapper[4697]: I0127 15:35:52.961690 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" event={"ID":"e6db178e-d462-4895-84e2-10695b0df557","Type":"ContainerStarted","Data":"a3c28fbcc5c059df5eb592e6a3f321d54eaf3fc48c41ff981937b3d94704081d"} Jan 27 15:35:53 crc kubenswrapper[4697]: I0127 15:35:53.009235 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" podStartSLOduration=2.573422815 podStartE2EDuration="3.009214838s" podCreationTimestamp="2026-01-27 15:35:50 +0000 UTC" firstStartedPulling="2026-01-27 15:35:51.913256415 +0000 UTC m=+1648.085656196" 
lastFinishedPulling="2026-01-27 15:35:52.349048438 +0000 UTC m=+1648.521448219" observedRunningTime="2026-01-27 15:35:53.000577068 +0000 UTC m=+1649.172976869" watchObservedRunningTime="2026-01-27 15:35:53.009214838 +0000 UTC m=+1649.181614619" Jan 27 15:35:53 crc kubenswrapper[4697]: I0127 15:35:53.569016 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:35:53 crc kubenswrapper[4697]: E0127 15:35:53.569300 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:36:04 crc kubenswrapper[4697]: I0127 15:36:04.575357 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:36:04 crc kubenswrapper[4697]: E0127 15:36:04.576135 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:36:16 crc kubenswrapper[4697]: I0127 15:36:16.570341 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:36:16 crc kubenswrapper[4697]: E0127 15:36:16.571165 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:36:31 crc kubenswrapper[4697]: I0127 15:36:31.568719 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:36:31 crc kubenswrapper[4697]: E0127 15:36:31.569559 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:36:45 crc kubenswrapper[4697]: I0127 15:36:45.674434 4697 scope.go:117] "RemoveContainer" containerID="e1bc7cabbdb98a377574b8a469cc4e4671483fba677339c3a52dc210ff59a833" Jan 27 15:36:45 crc kubenswrapper[4697]: I0127 15:36:45.708071 4697 scope.go:117] "RemoveContainer" containerID="0295f2908baee5e13e870e56581d707ed325b1fdb7cd3f413492ba7d6b494cbf" Jan 27 15:36:45 crc kubenswrapper[4697]: I0127 15:36:45.730044 4697 scope.go:117] "RemoveContainer" containerID="7b4bd846e4014e621005731ea7aeb194539fae86bf87dc737dce2817f7b59901" Jan 27 15:36:45 crc kubenswrapper[4697]: I0127 15:36:45.750615 4697 scope.go:117] "RemoveContainer" containerID="05346ea7c19e9d70c44f136c721843ff7aa80b17d1f177377fd120c190703a13" Jan 27 15:36:45 crc kubenswrapper[4697]: I0127 15:36:45.774610 4697 scope.go:117] "RemoveContainer" containerID="e3b92ba9b2ba429d3d07e6fa1c8e9b14929d53f5fd171b9cad61fa2e0feff069" Jan 27 15:36:46 crc kubenswrapper[4697]: I0127 15:36:46.568098 4697 scope.go:117] "RemoveContainer" 
containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:36:46 crc kubenswrapper[4697]: E0127 15:36:46.568445 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:37:01 crc kubenswrapper[4697]: I0127 15:37:01.569096 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:37:01 crc kubenswrapper[4697]: E0127 15:37:01.570479 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:37:13 crc kubenswrapper[4697]: I0127 15:37:13.570428 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:37:13 crc kubenswrapper[4697]: E0127 15:37:13.572101 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:37:27 crc kubenswrapper[4697]: I0127 15:37:27.568477 4697 scope.go:117] 
"RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:37:27 crc kubenswrapper[4697]: E0127 15:37:27.569370 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:37:38 crc kubenswrapper[4697]: I0127 15:37:38.568996 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:37:38 crc kubenswrapper[4697]: E0127 15:37:38.569801 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:37:45 crc kubenswrapper[4697]: I0127 15:37:45.835238 4697 scope.go:117] "RemoveContainer" containerID="02c13ce5699751ffc248a521ce8ea10f7ae72816264ac0d931dbf5406d971344" Jan 27 15:37:45 crc kubenswrapper[4697]: I0127 15:37:45.859088 4697 scope.go:117] "RemoveContainer" containerID="59128e65e95be31d74769bfebd00b67ea58744e50173e43384730053b920ada6" Jan 27 15:37:45 crc kubenswrapper[4697]: I0127 15:37:45.888285 4697 scope.go:117] "RemoveContainer" containerID="ab10c11cb891f533205f21d94ae30a3fcc03bf56b596c5d40083de3816ac850d" Jan 27 15:37:45 crc kubenswrapper[4697]: I0127 15:37:45.917552 4697 scope.go:117] "RemoveContainer" containerID="5b2ce5859050736dc922fbccbb00f7471aad78680e2038dafce29a03dd8cc67e" Jan 27 15:37:45 crc 
kubenswrapper[4697]: I0127 15:37:45.935969 4697 scope.go:117] "RemoveContainer" containerID="c27ba0856ff4174ec3f85fa411179971f8e544e3c50c06069f2f5693fca9e9fc" Jan 27 15:37:49 crc kubenswrapper[4697]: I0127 15:37:49.064400 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ca1e-account-create-update-qktst"] Jan 27 15:37:49 crc kubenswrapper[4697]: I0127 15:37:49.074077 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dk6vd"] Jan 27 15:37:49 crc kubenswrapper[4697]: I0127 15:37:49.082015 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ca1e-account-create-update-qktst"] Jan 27 15:37:49 crc kubenswrapper[4697]: I0127 15:37:49.089332 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dk6vd"] Jan 27 15:37:49 crc kubenswrapper[4697]: I0127 15:37:49.568353 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:37:49 crc kubenswrapper[4697]: E0127 15:37:49.568666 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:37:50 crc kubenswrapper[4697]: I0127 15:37:50.581716 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b261bbe5-03e9-4ebe-a8d0-a375b87722df" path="/var/lib/kubelet/pods/b261bbe5-03e9-4ebe-a8d0-a375b87722df/volumes" Jan 27 15:37:50 crc kubenswrapper[4697]: I0127 15:37:50.582651 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0bb870a-beb1-45cc-a1b2-30f6692a4cb6" 
path="/var/lib/kubelet/pods/e0bb870a-beb1-45cc-a1b2-30f6692a4cb6/volumes" Jan 27 15:37:51 crc kubenswrapper[4697]: I0127 15:37:51.037263 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hflbt"] Jan 27 15:37:51 crc kubenswrapper[4697]: I0127 15:37:51.045757 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-scmbs"] Jan 27 15:37:51 crc kubenswrapper[4697]: I0127 15:37:51.054979 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-958c-account-create-update-8gl6b"] Jan 27 15:37:51 crc kubenswrapper[4697]: I0127 15:37:51.065791 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4e77-account-create-update-x64b2"] Jan 27 15:37:51 crc kubenswrapper[4697]: I0127 15:37:51.074723 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hflbt"] Jan 27 15:37:51 crc kubenswrapper[4697]: I0127 15:37:51.086686 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-958c-account-create-update-8gl6b"] Jan 27 15:37:51 crc kubenswrapper[4697]: I0127 15:37:51.107383 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4e77-account-create-update-x64b2"] Jan 27 15:37:51 crc kubenswrapper[4697]: I0127 15:37:51.120199 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-scmbs"] Jan 27 15:37:52 crc kubenswrapper[4697]: I0127 15:37:52.581578 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35eda2ea-2c5d-446d-9065-cd7a9d12cd1e" path="/var/lib/kubelet/pods/35eda2ea-2c5d-446d-9065-cd7a9d12cd1e/volumes" Jan 27 15:37:52 crc kubenswrapper[4697]: I0127 15:37:52.584243 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c870c6-b268-440d-a6a4-d1ea57382a67" path="/var/lib/kubelet/pods/59c870c6-b268-440d-a6a4-d1ea57382a67/volumes" Jan 27 15:37:52 crc kubenswrapper[4697]: I0127 15:37:52.585115 4697 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e67f4cf-29a8-46ce-8ea6-757b703f82b1" path="/var/lib/kubelet/pods/5e67f4cf-29a8-46ce-8ea6-757b703f82b1/volumes" Jan 27 15:37:52 crc kubenswrapper[4697]: I0127 15:37:52.586927 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f1c9d2-cb07-4cd8-8614-30734cff2994" path="/var/lib/kubelet/pods/f8f1c9d2-cb07-4cd8-8614-30734cff2994/volumes" Jan 27 15:38:02 crc kubenswrapper[4697]: I0127 15:38:02.568709 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:38:02 crc kubenswrapper[4697]: E0127 15:38:02.569561 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:38:14 crc kubenswrapper[4697]: I0127 15:38:14.038985 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8q5r5"] Jan 27 15:38:14 crc kubenswrapper[4697]: I0127 15:38:14.048504 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-8q5r5"] Jan 27 15:38:14 crc kubenswrapper[4697]: I0127 15:38:14.599556 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586cb948-6d70-4f31-b21b-9088567a2d5c" path="/var/lib/kubelet/pods/586cb948-6d70-4f31-b21b-9088567a2d5c/volumes" Jan 27 15:38:15 crc kubenswrapper[4697]: I0127 15:38:15.030258 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-g8zkt"] Jan 27 15:38:15 crc kubenswrapper[4697]: I0127 15:38:15.038471 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ph6xf"] 
Jan 27 15:38:15 crc kubenswrapper[4697]: I0127 15:38:15.046334 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ph6xf"] Jan 27 15:38:15 crc kubenswrapper[4697]: I0127 15:38:15.055317 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-g8zkt"] Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.031208 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-02c8-account-create-update-2mlxf"] Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.041634 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-m5xbv"] Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.050585 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a724-account-create-update-gldwc"] Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.063469 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-m5xbv"] Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.071072 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a724-account-create-update-gldwc"] Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.079217 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-02c8-account-create-update-2mlxf"] Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.584492 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ec00c7-c7ad-4705-bd75-386e42e74100" path="/var/lib/kubelet/pods/17ec00c7-c7ad-4705-bd75-386e42e74100/volumes" Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.589310 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26234d1c-51cd-4578-9ec6-cf5a7e8dc55c" path="/var/lib/kubelet/pods/26234d1c-51cd-4578-9ec6-cf5a7e8dc55c/volumes" Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.592995 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="45de88c8-a3d8-4d43-84af-0f72cabc6057" path="/var/lib/kubelet/pods/45de88c8-a3d8-4d43-84af-0f72cabc6057/volumes" Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.597076 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7812f067-3dde-40e3-9a87-7e5d5d7d9597" path="/var/lib/kubelet/pods/7812f067-3dde-40e3-9a87-7e5d5d7d9597/volumes" Jan 27 15:38:16 crc kubenswrapper[4697]: I0127 15:38:16.601612 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0f2382-375f-49bd-84e7-2de103947c5e" path="/var/lib/kubelet/pods/ff0f2382-375f-49bd-84e7-2de103947c5e/volumes" Jan 27 15:38:17 crc kubenswrapper[4697]: I0127 15:38:17.048869 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3ba9-account-create-update-d6wqk"] Jan 27 15:38:17 crc kubenswrapper[4697]: I0127 15:38:17.058870 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3ba9-account-create-update-d6wqk"] Jan 27 15:38:17 crc kubenswrapper[4697]: I0127 15:38:17.568508 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:38:17 crc kubenswrapper[4697]: E0127 15:38:17.568774 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:38:18 crc kubenswrapper[4697]: I0127 15:38:18.584525 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f61efb-e85d-4d6a-88d3-64b0a22dd759" path="/var/lib/kubelet/pods/d0f61efb-e85d-4d6a-88d3-64b0a22dd759/volumes" Jan 27 15:38:31 crc kubenswrapper[4697]: I0127 15:38:31.073751 4697 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/keystone-db-sync-4tjc9"] Jan 27 15:38:31 crc kubenswrapper[4697]: I0127 15:38:31.086819 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4tjc9"] Jan 27 15:38:31 crc kubenswrapper[4697]: I0127 15:38:31.568961 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:38:31 crc kubenswrapper[4697]: E0127 15:38:31.569522 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:38:32 crc kubenswrapper[4697]: I0127 15:38:32.580257 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe4958e-ea1c-4420-939b-1ff0c52690fa" path="/var/lib/kubelet/pods/8fe4958e-ea1c-4420-939b-1ff0c52690fa/volumes" Jan 27 15:38:43 crc kubenswrapper[4697]: I0127 15:38:43.568823 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:38:43 crc kubenswrapper[4697]: E0127 15:38:43.570490 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.017968 4697 scope.go:117] "RemoveContainer" containerID="d574d3462c9c9ced213e0bc8a80d48e45569e0116e26de633b987ffd2ebb4464" Jan 27 
15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.052698 4697 scope.go:117] "RemoveContainer" containerID="f68b7eb1707fe08e5fd67cb3c11131787a1326e928789e44ace2163d4dacc47b" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.102484 4697 scope.go:117] "RemoveContainer" containerID="fbec09964c961e55efcb6ef5783c84a88f1c95e1a6aa9cc800f141dd8e5062db" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.150012 4697 scope.go:117] "RemoveContainer" containerID="22911e52044c925463a4f21a5c009c3a79669418899ee8ae8827d98621b6b6b5" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.192079 4697 scope.go:117] "RemoveContainer" containerID="c13661180bdf46ee7e077d53b2ad5b73dcd82175c3391648dd3cc79125a12f6a" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.238085 4697 scope.go:117] "RemoveContainer" containerID="a0bce1eef5ce6123d85beeabcc7eb474c6257112de00b49a05aab36470706ee8" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.277251 4697 scope.go:117] "RemoveContainer" containerID="1fdfa219d1e84f3159240b033968389ea916f9bb7c05d306752262ba3e6cebb9" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.306707 4697 scope.go:117] "RemoveContainer" containerID="7e0e9df04c17427c4702d056553fc24f66ea4348c32523995153df39a1c2af62" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.334007 4697 scope.go:117] "RemoveContainer" containerID="a95db96655817a0910af44fe172774ddcf550a70d788ba5d5ee344897c2bb36e" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.355885 4697 scope.go:117] "RemoveContainer" containerID="26d509663ef0b579cee9235ec1ed1efb1c8fabf44b0eefc9b39c6b9ae718c769" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.374592 4697 scope.go:117] "RemoveContainer" containerID="4aff731f7f170e72ef347567b2ac58245f47fdb51e0677cf01073f4767e3674f" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.399715 4697 scope.go:117] "RemoveContainer" containerID="c7a8042ba9f4bda68f0c264b663451c54f6fa0f486e1d7ef66e911f74e1f0a2d" Jan 27 15:38:46 crc 
kubenswrapper[4697]: I0127 15:38:46.424888 4697 scope.go:117] "RemoveContainer" containerID="62fd4c4b0a1857647cf9d9df3b31686fd3cb5d2d66c9bd518cb4bc39e28c6460" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.470687 4697 scope.go:117] "RemoveContainer" containerID="b1db229931a4ea3764a6e347740499b8491aa2d4b4fe54bfce8e0cdb98689a27" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.508964 4697 scope.go:117] "RemoveContainer" containerID="9b76737ca08d333ff296f533d7727d1b52c2558ce441d9f993c6440d4bf4857b" Jan 27 15:38:46 crc kubenswrapper[4697]: I0127 15:38:46.533473 4697 scope.go:117] "RemoveContainer" containerID="db2d41f8c7593cec6a2c4b6ed52c911f342ade7f340e0e5d2dc7aadf73e99594" Jan 27 15:38:54 crc kubenswrapper[4697]: I0127 15:38:54.574251 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:38:54 crc kubenswrapper[4697]: E0127 15:38:54.575036 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:39:00 crc kubenswrapper[4697]: I0127 15:39:00.042140 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xgxk5"] Jan 27 15:39:00 crc kubenswrapper[4697]: I0127 15:39:00.050679 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xgxk5"] Jan 27 15:39:00 crc kubenswrapper[4697]: I0127 15:39:00.581743 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75c5842-64d4-45c9-a282-b8fb8bea1af6" path="/var/lib/kubelet/pods/f75c5842-64d4-45c9-a282-b8fb8bea1af6/volumes" Jan 27 15:39:05 crc kubenswrapper[4697]: I0127 15:39:05.568447 4697 
scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:39:05 crc kubenswrapper[4697]: I0127 15:39:05.829389 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"8a39377f66792076ded24d1dd2009ec1f66f27328a943fc9055b637c8a864fd4"} Jan 27 15:39:20 crc kubenswrapper[4697]: I0127 15:39:20.031581 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-pbx5g"] Jan 27 15:39:20 crc kubenswrapper[4697]: I0127 15:39:20.039650 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-pbx5g"] Jan 27 15:39:20 crc kubenswrapper[4697]: I0127 15:39:20.580068 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fac9142-cfe0-4849-b6d4-3315ce2475ef" path="/var/lib/kubelet/pods/8fac9142-cfe0-4849-b6d4-3315ce2475ef/volumes" Jan 27 15:39:36 crc kubenswrapper[4697]: I0127 15:39:36.042950 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xc9hp"] Jan 27 15:39:36 crc kubenswrapper[4697]: I0127 15:39:36.051129 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xc9hp"] Jan 27 15:39:36 crc kubenswrapper[4697]: I0127 15:39:36.581387 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11e83f3-61e4-4f13-89e2-cf9209760247" path="/var/lib/kubelet/pods/c11e83f3-61e4-4f13-89e2-cf9209760247/volumes" Jan 27 15:39:44 crc kubenswrapper[4697]: I0127 15:39:44.029687 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pdrgr"] Jan 27 15:39:44 crc kubenswrapper[4697]: I0127 15:39:44.039238 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pdrgr"] Jan 27 15:39:44 crc kubenswrapper[4697]: I0127 15:39:44.581100 4697 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="edc034d6-13db-4ae2-be4c-86e4dad22dc7" path="/var/lib/kubelet/pods/edc034d6-13db-4ae2-be4c-86e4dad22dc7/volumes" Jan 27 15:39:46 crc kubenswrapper[4697]: I0127 15:39:46.828487 4697 scope.go:117] "RemoveContainer" containerID="b415f2de3fd0a8e4b4e4315c410125fe1f25008c46f379f9a295599c46f44730" Jan 27 15:39:46 crc kubenswrapper[4697]: I0127 15:39:46.880177 4697 scope.go:117] "RemoveContainer" containerID="3b3e5994844f75460670ebf4405acc857c69bcbf2ea85d491c81da5c3d0a7c4f" Jan 27 15:39:46 crc kubenswrapper[4697]: I0127 15:39:46.913655 4697 scope.go:117] "RemoveContainer" containerID="80dc09b6ac5700456b759d3d7ebde4333d93b1f0223b1146f3edfbff995bf507" Jan 27 15:39:46 crc kubenswrapper[4697]: I0127 15:39:46.982158 4697 scope.go:117] "RemoveContainer" containerID="7c000ddf2638ad23872e6f733e1f5c537c95bf9ee2fd1f801eac52b3a2c28342" Jan 27 15:39:58 crc kubenswrapper[4697]: I0127 15:39:58.059162 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-5c6j2"] Jan 27 15:39:58 crc kubenswrapper[4697]: I0127 15:39:58.067300 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-5c6j2"] Jan 27 15:39:58 crc kubenswrapper[4697]: I0127 15:39:58.579574 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a835cc-5807-48ce-a9f8-354d3182603f" path="/var/lib/kubelet/pods/09a835cc-5807-48ce-a9f8-354d3182603f/volumes" Jan 27 15:40:07 crc kubenswrapper[4697]: I0127 15:40:07.055078 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-n5g7m"] Jan 27 15:40:07 crc kubenswrapper[4697]: I0127 15:40:07.059218 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-n5g7m"] Jan 27 15:40:08 crc kubenswrapper[4697]: I0127 15:40:08.581754 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba2a2abf-806a-4708-8f03-9e68c85c6c6c" 
path="/var/lib/kubelet/pods/ba2a2abf-806a-4708-8f03-9e68c85c6c6c/volumes" Jan 27 15:40:13 crc kubenswrapper[4697]: I0127 15:40:13.066121 4697 generic.go:334] "Generic (PLEG): container finished" podID="e6db178e-d462-4895-84e2-10695b0df557" containerID="a3c28fbcc5c059df5eb592e6a3f321d54eaf3fc48c41ff981937b3d94704081d" exitCode=0 Jan 27 15:40:13 crc kubenswrapper[4697]: I0127 15:40:13.066213 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" event={"ID":"e6db178e-d462-4895-84e2-10695b0df557","Type":"ContainerDied","Data":"a3c28fbcc5c059df5eb592e6a3f321d54eaf3fc48c41ff981937b3d94704081d"} Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.465267 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.648755 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-ssh-key-openstack-edpm-ipam\") pod \"e6db178e-d462-4895-84e2-10695b0df557\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.648917 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rctld\" (UniqueName: \"kubernetes.io/projected/e6db178e-d462-4895-84e2-10695b0df557-kube-api-access-rctld\") pod \"e6db178e-d462-4895-84e2-10695b0df557\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.648961 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-bootstrap-combined-ca-bundle\") pod \"e6db178e-d462-4895-84e2-10695b0df557\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") 
" Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.649146 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-inventory\") pod \"e6db178e-d462-4895-84e2-10695b0df557\" (UID: \"e6db178e-d462-4895-84e2-10695b0df557\") " Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.656699 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e6db178e-d462-4895-84e2-10695b0df557" (UID: "e6db178e-d462-4895-84e2-10695b0df557"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.662443 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6db178e-d462-4895-84e2-10695b0df557-kube-api-access-rctld" (OuterVolumeSpecName: "kube-api-access-rctld") pod "e6db178e-d462-4895-84e2-10695b0df557" (UID: "e6db178e-d462-4895-84e2-10695b0df557"). InnerVolumeSpecName "kube-api-access-rctld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.688931 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6db178e-d462-4895-84e2-10695b0df557" (UID: "e6db178e-d462-4895-84e2-10695b0df557"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.689531 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-inventory" (OuterVolumeSpecName: "inventory") pod "e6db178e-d462-4895-84e2-10695b0df557" (UID: "e6db178e-d462-4895-84e2-10695b0df557"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.752496 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.752532 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rctld\" (UniqueName: \"kubernetes.io/projected/e6db178e-d462-4895-84e2-10695b0df557-kube-api-access-rctld\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.752549 4697 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:14 crc kubenswrapper[4697]: I0127 15:40:14.752563 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6db178e-d462-4895-84e2-10695b0df557-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.083759 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" event={"ID":"e6db178e-d462-4895-84e2-10695b0df557","Type":"ContainerDied","Data":"71722b0223e307fb8d23b94c5d3bfb6f77b7f814b04c0dc6629e5f7bf69d2700"} Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.084096 4697 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71722b0223e307fb8d23b94c5d3bfb6f77b7f814b04c0dc6629e5f7bf69d2700" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.083859 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.183883 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj"] Jan 27 15:40:15 crc kubenswrapper[4697]: E0127 15:40:15.184305 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6db178e-d462-4895-84e2-10695b0df557" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.184323 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6db178e-d462-4895-84e2-10695b0df557" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.184481 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6db178e-d462-4895-84e2-10695b0df557" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.185078 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.188038 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.188413 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.188582 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.188699 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.201961 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj"] Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.362941 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lb4g\" (UniqueName: \"kubernetes.io/projected/0b244a0a-7ccb-49be-bcef-497d3b0f99be-kube-api-access-6lb4g\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67tjj\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.363077 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67tjj\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc 
kubenswrapper[4697]: I0127 15:40:15.363118 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67tjj\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.466495 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67tjj\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.466603 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67tjj\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.466743 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lb4g\" (UniqueName: \"kubernetes.io/projected/0b244a0a-7ccb-49be-bcef-497d3b0f99be-kube-api-access-6lb4g\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67tjj\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.475659 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67tjj\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.483187 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67tjj\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.517085 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lb4g\" (UniqueName: \"kubernetes.io/projected/0b244a0a-7ccb-49be-bcef-497d3b0f99be-kube-api-access-6lb4g\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67tjj\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:15 crc kubenswrapper[4697]: I0127 15:40:15.803177 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:40:16 crc kubenswrapper[4697]: I0127 15:40:16.347749 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj"] Jan 27 15:40:17 crc kubenswrapper[4697]: I0127 15:40:17.103624 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" event={"ID":"0b244a0a-7ccb-49be-bcef-497d3b0f99be","Type":"ContainerStarted","Data":"04cbbe16abebd5c11956ff3c72927efc2f762cf4a7a1a86d3f865376c9943b77"} Jan 27 15:40:18 crc kubenswrapper[4697]: I0127 15:40:18.114714 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" event={"ID":"0b244a0a-7ccb-49be-bcef-497d3b0f99be","Type":"ContainerStarted","Data":"15b42499407bc0c710b4743405bc4c266269e73475dcdfebeea643cc8d9c6b90"} Jan 27 15:40:18 crc kubenswrapper[4697]: I0127 15:40:18.135744 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" podStartSLOduration=2.730671251 podStartE2EDuration="3.135721258s" podCreationTimestamp="2026-01-27 15:40:15 +0000 UTC" firstStartedPulling="2026-01-27 15:40:16.359571343 +0000 UTC m=+1912.531971124" lastFinishedPulling="2026-01-27 15:40:16.76462135 +0000 UTC m=+1912.937021131" observedRunningTime="2026-01-27 15:40:18.130624994 +0000 UTC m=+1914.303024765" watchObservedRunningTime="2026-01-27 15:40:18.135721258 +0000 UTC m=+1914.308121049" Jan 27 15:40:47 crc kubenswrapper[4697]: I0127 15:40:47.102161 4697 scope.go:117] "RemoveContainer" containerID="7a1be61c999c8362c2811e0f505dab5f61ce2764c439a9bc30b6b44d69387c3e" Jan 27 15:40:47 crc kubenswrapper[4697]: I0127 15:40:47.142671 4697 scope.go:117] "RemoveContainer" containerID="cbfa7c85b9e2c7f5b3e7e417f0fd23351a97f5c2e8291eebc7a7e7770d3b08b2" Jan 27 
15:40:53 crc kubenswrapper[4697]: I0127 15:40:53.038946 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xldlq"] Jan 27 15:40:53 crc kubenswrapper[4697]: I0127 15:40:53.049231 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-q8kwj"] Jan 27 15:40:53 crc kubenswrapper[4697]: I0127 15:40:53.059185 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-abaf-account-create-update-bwx76"] Jan 27 15:40:53 crc kubenswrapper[4697]: I0127 15:40:53.067189 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xldlq"] Jan 27 15:40:53 crc kubenswrapper[4697]: I0127 15:40:53.074767 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-q8kwj"] Jan 27 15:40:53 crc kubenswrapper[4697]: I0127 15:40:53.082924 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-abaf-account-create-update-bwx76"] Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.030162 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d000-account-create-update-4bkrc"] Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.040070 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-v9vbm"] Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.048932 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-33d3-account-create-update-5rbnd"] Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.058762 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d000-account-create-update-4bkrc"] Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.068307 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-33d3-account-create-update-5rbnd"] Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.076807 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-db-create-v9vbm"] Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.579648 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="597107a0-5b36-457f-82f2-486eb5d2880b" path="/var/lib/kubelet/pods/597107a0-5b36-457f-82f2-486eb5d2880b/volumes" Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.580612 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6172572f-5fa4-419b-8d75-695458f7b4bd" path="/var/lib/kubelet/pods/6172572f-5fa4-419b-8d75-695458f7b4bd/volumes" Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.581271 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0fb28a5-449f-4063-8063-47fe549d8b30" path="/var/lib/kubelet/pods/b0fb28a5-449f-4063-8063-47fe549d8b30/volumes" Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.581907 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d02d925a-22fb-46b8-a24e-754310c36008" path="/var/lib/kubelet/pods/d02d925a-22fb-46b8-a24e-754310c36008/volumes" Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.582998 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5466297-9701-4292-954c-ac110351d448" path="/var/lib/kubelet/pods/f5466297-9701-4292-954c-ac110351d448/volumes" Jan 27 15:40:54 crc kubenswrapper[4697]: I0127 15:40:54.583636 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f723bccb-279a-4613-8825-6af12bf7a421" path="/var/lib/kubelet/pods/f723bccb-279a-4613-8825-6af12bf7a421/volumes" Jan 27 15:41:25 crc kubenswrapper[4697]: I0127 15:41:25.109375 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:41:25 crc kubenswrapper[4697]: I0127 15:41:25.109998 4697 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:41:47 crc kubenswrapper[4697]: I0127 15:41:47.233626 4697 scope.go:117] "RemoveContainer" containerID="c0ca261d130fbe08c2aee5de95eafee9b4138f8782a380b9c85f22813626fc40" Jan 27 15:41:47 crc kubenswrapper[4697]: I0127 15:41:47.256805 4697 scope.go:117] "RemoveContainer" containerID="517354af8f224427c18a7bb451b1a385b64dde899684989428ca535374d6c9ae" Jan 27 15:41:47 crc kubenswrapper[4697]: I0127 15:41:47.323683 4697 scope.go:117] "RemoveContainer" containerID="3617dbee2a7ed925b8df4578ec74a6b740a988cba21c1ea798ce504fd7a33edb" Jan 27 15:41:47 crc kubenswrapper[4697]: I0127 15:41:47.382535 4697 scope.go:117] "RemoveContainer" containerID="ad71c925f99b35347a70821180aaa38732851aa2f657330db39a2fa0d42790f1" Jan 27 15:41:47 crc kubenswrapper[4697]: I0127 15:41:47.434674 4697 scope.go:117] "RemoveContainer" containerID="9f332466f31941647d265892d47e52cd4f1f9d1d59ba86bfe5c382d31904f671" Jan 27 15:41:47 crc kubenswrapper[4697]: I0127 15:41:47.490891 4697 scope.go:117] "RemoveContainer" containerID="06e40ab31c616b05a9e6cf7a5543aeaf866ce670433cbde6b982f4eb1cf01b9c" Jan 27 15:41:55 crc kubenswrapper[4697]: I0127 15:41:55.108506 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:41:55 crc kubenswrapper[4697]: I0127 15:41:55.109272 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:42:04 crc kubenswrapper[4697]: I0127 15:42:04.035603 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbv8q"] Jan 27 15:42:04 crc kubenswrapper[4697]: I0127 15:42:04.044066 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbv8q"] Jan 27 15:42:04 crc kubenswrapper[4697]: I0127 15:42:04.583472 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d5ac88-2f0e-4785-8f16-908526425bf5" path="/var/lib/kubelet/pods/68d5ac88-2f0e-4785-8f16-908526425bf5/volumes" Jan 27 15:42:15 crc kubenswrapper[4697]: I0127 15:42:15.293832 4697 generic.go:334] "Generic (PLEG): container finished" podID="0b244a0a-7ccb-49be-bcef-497d3b0f99be" containerID="15b42499407bc0c710b4743405bc4c266269e73475dcdfebeea643cc8d9c6b90" exitCode=0 Jan 27 15:42:15 crc kubenswrapper[4697]: I0127 15:42:15.293985 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" event={"ID":"0b244a0a-7ccb-49be-bcef-497d3b0f99be","Type":"ContainerDied","Data":"15b42499407bc0c710b4743405bc4c266269e73475dcdfebeea643cc8d9c6b90"} Jan 27 15:42:16 crc kubenswrapper[4697]: I0127 15:42:16.751861 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:42:16 crc kubenswrapper[4697]: I0127 15:42:16.917920 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lb4g\" (UniqueName: \"kubernetes.io/projected/0b244a0a-7ccb-49be-bcef-497d3b0f99be-kube-api-access-6lb4g\") pod \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " Jan 27 15:42:16 crc kubenswrapper[4697]: I0127 15:42:16.918011 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-ssh-key-openstack-edpm-ipam\") pod \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " Jan 27 15:42:16 crc kubenswrapper[4697]: I0127 15:42:16.918050 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-inventory\") pod \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\" (UID: \"0b244a0a-7ccb-49be-bcef-497d3b0f99be\") " Jan 27 15:42:16 crc kubenswrapper[4697]: I0127 15:42:16.926568 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b244a0a-7ccb-49be-bcef-497d3b0f99be-kube-api-access-6lb4g" (OuterVolumeSpecName: "kube-api-access-6lb4g") pod "0b244a0a-7ccb-49be-bcef-497d3b0f99be" (UID: "0b244a0a-7ccb-49be-bcef-497d3b0f99be"). InnerVolumeSpecName "kube-api-access-6lb4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:16 crc kubenswrapper[4697]: I0127 15:42:16.945667 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b244a0a-7ccb-49be-bcef-497d3b0f99be" (UID: "0b244a0a-7ccb-49be-bcef-497d3b0f99be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:16 crc kubenswrapper[4697]: I0127 15:42:16.948335 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-inventory" (OuterVolumeSpecName: "inventory") pod "0b244a0a-7ccb-49be-bcef-497d3b0f99be" (UID: "0b244a0a-7ccb-49be-bcef-497d3b0f99be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.020378 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lb4g\" (UniqueName: \"kubernetes.io/projected/0b244a0a-7ccb-49be-bcef-497d3b0f99be-kube-api-access-6lb4g\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.020412 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.020427 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b244a0a-7ccb-49be-bcef-497d3b0f99be-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.319946 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" 
event={"ID":"0b244a0a-7ccb-49be-bcef-497d3b0f99be","Type":"ContainerDied","Data":"04cbbe16abebd5c11956ff3c72927efc2f762cf4a7a1a86d3f865376c9943b77"} Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.319989 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04cbbe16abebd5c11956ff3c72927efc2f762cf4a7a1a86d3f865376c9943b77" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.320034 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67tjj" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.433572 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r"] Jan 27 15:42:17 crc kubenswrapper[4697]: E0127 15:42:17.434035 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b244a0a-7ccb-49be-bcef-497d3b0f99be" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.434056 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b244a0a-7ccb-49be-bcef-497d3b0f99be" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.434274 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b244a0a-7ccb-49be-bcef-497d3b0f99be" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.435016 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.437884 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.438034 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.438282 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.438399 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.441633 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r"] Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.529229 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.529390 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkbds\" (UniqueName: \"kubernetes.io/projected/ed55f221-f5eb-421e-88b3-682ff73202dc-kube-api-access-bkbds\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 
15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.529460 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.630985 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.631171 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.631244 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkbds\" (UniqueName: \"kubernetes.io/projected/ed55f221-f5eb-421e-88b3-682ff73202dc-kube-api-access-bkbds\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.634743 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.634753 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.649777 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkbds\" (UniqueName: \"kubernetes.io/projected/ed55f221-f5eb-421e-88b3-682ff73202dc-kube-api-access-bkbds\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:17 crc kubenswrapper[4697]: I0127 15:42:17.748995 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:42:18 crc kubenswrapper[4697]: I0127 15:42:18.249267 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r"] Jan 27 15:42:18 crc kubenswrapper[4697]: I0127 15:42:18.258761 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:42:18 crc kubenswrapper[4697]: I0127 15:42:18.330664 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" event={"ID":"ed55f221-f5eb-421e-88b3-682ff73202dc","Type":"ContainerStarted","Data":"2c7c2a7897892850f2ea53b96f522f75caebc3481ba7a042514aab07018048d9"} Jan 27 15:42:19 crc kubenswrapper[4697]: I0127 15:42:19.341550 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" event={"ID":"ed55f221-f5eb-421e-88b3-682ff73202dc","Type":"ContainerStarted","Data":"b1391421dd75805908f23139e0e7450e0c1cb873f44173767e082d70289c6354"} Jan 27 15:42:19 crc kubenswrapper[4697]: I0127 15:42:19.367221 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" podStartSLOduration=1.9504136760000002 podStartE2EDuration="2.367200052s" podCreationTimestamp="2026-01-27 15:42:17 +0000 UTC" firstStartedPulling="2026-01-27 15:42:18.258541287 +0000 UTC m=+2034.430941058" lastFinishedPulling="2026-01-27 15:42:18.675327643 +0000 UTC m=+2034.847727434" observedRunningTime="2026-01-27 15:42:19.366030654 +0000 UTC m=+2035.538430435" watchObservedRunningTime="2026-01-27 15:42:19.367200052 +0000 UTC m=+2035.539599843" Jan 27 15:42:25 crc kubenswrapper[4697]: I0127 15:42:25.109388 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:42:25 crc kubenswrapper[4697]: I0127 15:42:25.109996 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:42:25 crc kubenswrapper[4697]: I0127 15:42:25.110057 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:42:25 crc kubenswrapper[4697]: I0127 15:42:25.110841 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a39377f66792076ded24d1dd2009ec1f66f27328a943fc9055b637c8a864fd4"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:42:25 crc kubenswrapper[4697]: I0127 15:42:25.110902 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://8a39377f66792076ded24d1dd2009ec1f66f27328a943fc9055b637c8a864fd4" gracePeriod=600 Jan 27 15:42:25 crc kubenswrapper[4697]: I0127 15:42:25.387647 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="8a39377f66792076ded24d1dd2009ec1f66f27328a943fc9055b637c8a864fd4" exitCode=0 Jan 27 15:42:25 crc kubenswrapper[4697]: I0127 15:42:25.387724 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"8a39377f66792076ded24d1dd2009ec1f66f27328a943fc9055b637c8a864fd4"} Jan 27 15:42:25 crc kubenswrapper[4697]: I0127 15:42:25.388022 4697 scope.go:117] "RemoveContainer" containerID="1041a01976f73e6dbbf881bb74cdc0195408ed73fc04fdd6c07635790ef653fc" Jan 27 15:42:26 crc kubenswrapper[4697]: I0127 15:42:26.408470 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6"} Jan 27 15:42:36 crc kubenswrapper[4697]: I0127 15:42:36.054980 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-szx5d"] Jan 27 15:42:36 crc kubenswrapper[4697]: I0127 15:42:36.066681 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-szx5d"] Jan 27 15:42:36 crc kubenswrapper[4697]: I0127 15:42:36.589266 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5544478-ac7d-47a8-a27f-8da131efb0fd" path="/var/lib/kubelet/pods/b5544478-ac7d-47a8-a27f-8da131efb0fd/volumes" Jan 27 15:42:45 crc kubenswrapper[4697]: I0127 15:42:45.027605 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qn4m"] Jan 27 15:42:45 crc kubenswrapper[4697]: I0127 15:42:45.036098 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qn4m"] Jan 27 15:42:46 crc kubenswrapper[4697]: I0127 15:42:46.580247 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab4d694-1486-4154-89dd-5f2f04639abe" path="/var/lib/kubelet/pods/dab4d694-1486-4154-89dd-5f2f04639abe/volumes" Jan 27 15:42:47 crc kubenswrapper[4697]: I0127 15:42:47.660919 4697 scope.go:117] 
"RemoveContainer" containerID="1ff89c29f96f550b0c007c37fa86cbd840a71abf4f2d281576e8d3720d9458cf" Jan 27 15:42:47 crc kubenswrapper[4697]: I0127 15:42:47.712037 4697 scope.go:117] "RemoveContainer" containerID="ea5959e372e764bda1a7480ee5898792d34f826f9d14924c3af38f6211ba8f13" Jan 27 15:42:47 crc kubenswrapper[4697]: I0127 15:42:47.772260 4697 scope.go:117] "RemoveContainer" containerID="1bcbeaaf6eb253bb46ca641faacb97dbbeb6a74dd9da50d0b7167b10271fb699" Jan 27 15:43:21 crc kubenswrapper[4697]: I0127 15:43:21.055347 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mls9d"] Jan 27 15:43:21 crc kubenswrapper[4697]: I0127 15:43:21.064492 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mls9d"] Jan 27 15:43:22 crc kubenswrapper[4697]: I0127 15:43:22.579660 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b" path="/var/lib/kubelet/pods/e9d65ac1-a402-40bd-96e8-9e7cacbe8f0b/volumes" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.005856 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-49vrp"] Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.008687 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.027431 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49vrp"] Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.038741 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-catalog-content\") pod \"community-operators-49vrp\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.038856 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-utilities\") pod \"community-operators-49vrp\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.038909 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxq9v\" (UniqueName: \"kubernetes.io/projected/ce2d0c40-6904-4472-8711-b8e3bdaf1876-kube-api-access-cxq9v\") pod \"community-operators-49vrp\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.140447 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-catalog-content\") pod \"community-operators-49vrp\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.140565 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-utilities\") pod \"community-operators-49vrp\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.140651 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxq9v\" (UniqueName: \"kubernetes.io/projected/ce2d0c40-6904-4472-8711-b8e3bdaf1876-kube-api-access-cxq9v\") pod \"community-operators-49vrp\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.140978 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-catalog-content\") pod \"community-operators-49vrp\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.141301 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-utilities\") pod \"community-operators-49vrp\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.167674 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxq9v\" (UniqueName: \"kubernetes.io/projected/ce2d0c40-6904-4472-8711-b8e3bdaf1876-kube-api-access-cxq9v\") pod \"community-operators-49vrp\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.332527 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:38 crc kubenswrapper[4697]: I0127 15:43:38.980105 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49vrp"] Jan 27 15:43:39 crc kubenswrapper[4697]: I0127 15:43:39.035409 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49vrp" event={"ID":"ce2d0c40-6904-4472-8711-b8e3bdaf1876","Type":"ContainerStarted","Data":"c6c79ea4a7d21e6b3ad34cdc8272963447168d7035d001192c6dd0ead212aae7"} Jan 27 15:43:40 crc kubenswrapper[4697]: I0127 15:43:40.045692 4697 generic.go:334] "Generic (PLEG): container finished" podID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerID="a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821" exitCode=0 Jan 27 15:43:40 crc kubenswrapper[4697]: I0127 15:43:40.045740 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49vrp" event={"ID":"ce2d0c40-6904-4472-8711-b8e3bdaf1876","Type":"ContainerDied","Data":"a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821"} Jan 27 15:43:42 crc kubenswrapper[4697]: I0127 15:43:42.074486 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49vrp" event={"ID":"ce2d0c40-6904-4472-8711-b8e3bdaf1876","Type":"ContainerStarted","Data":"4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd"} Jan 27 15:43:45 crc kubenswrapper[4697]: I0127 15:43:45.100028 4697 generic.go:334] "Generic (PLEG): container finished" podID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerID="4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd" exitCode=0 Jan 27 15:43:45 crc kubenswrapper[4697]: I0127 15:43:45.100121 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49vrp" 
event={"ID":"ce2d0c40-6904-4472-8711-b8e3bdaf1876","Type":"ContainerDied","Data":"4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd"} Jan 27 15:43:46 crc kubenswrapper[4697]: I0127 15:43:46.110402 4697 generic.go:334] "Generic (PLEG): container finished" podID="ed55f221-f5eb-421e-88b3-682ff73202dc" containerID="b1391421dd75805908f23139e0e7450e0c1cb873f44173767e082d70289c6354" exitCode=0 Jan 27 15:43:46 crc kubenswrapper[4697]: I0127 15:43:46.110473 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" event={"ID":"ed55f221-f5eb-421e-88b3-682ff73202dc","Type":"ContainerDied","Data":"b1391421dd75805908f23139e0e7450e0c1cb873f44173767e082d70289c6354"} Jan 27 15:43:46 crc kubenswrapper[4697]: I0127 15:43:46.115110 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49vrp" event={"ID":"ce2d0c40-6904-4472-8711-b8e3bdaf1876","Type":"ContainerStarted","Data":"424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86"} Jan 27 15:43:46 crc kubenswrapper[4697]: I0127 15:43:46.170167 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-49vrp" podStartSLOduration=3.647157269 podStartE2EDuration="9.170142237s" podCreationTimestamp="2026-01-27 15:43:37 +0000 UTC" firstStartedPulling="2026-01-27 15:43:40.047812768 +0000 UTC m=+2116.220212549" lastFinishedPulling="2026-01-27 15:43:45.570797736 +0000 UTC m=+2121.743197517" observedRunningTime="2026-01-27 15:43:46.161067225 +0000 UTC m=+2122.333467026" watchObservedRunningTime="2026-01-27 15:43:46.170142237 +0000 UTC m=+2122.342542018" Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.581718 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.653147 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkbds\" (UniqueName: \"kubernetes.io/projected/ed55f221-f5eb-421e-88b3-682ff73202dc-kube-api-access-bkbds\") pod \"ed55f221-f5eb-421e-88b3-682ff73202dc\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.653359 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-ssh-key-openstack-edpm-ipam\") pod \"ed55f221-f5eb-421e-88b3-682ff73202dc\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.654335 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-inventory\") pod \"ed55f221-f5eb-421e-88b3-682ff73202dc\" (UID: \"ed55f221-f5eb-421e-88b3-682ff73202dc\") " Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.672881 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed55f221-f5eb-421e-88b3-682ff73202dc-kube-api-access-bkbds" (OuterVolumeSpecName: "kube-api-access-bkbds") pod "ed55f221-f5eb-421e-88b3-682ff73202dc" (UID: "ed55f221-f5eb-421e-88b3-682ff73202dc"). InnerVolumeSpecName "kube-api-access-bkbds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.682986 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-inventory" (OuterVolumeSpecName: "inventory") pod "ed55f221-f5eb-421e-88b3-682ff73202dc" (UID: "ed55f221-f5eb-421e-88b3-682ff73202dc"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.695610 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed55f221-f5eb-421e-88b3-682ff73202dc" (UID: "ed55f221-f5eb-421e-88b3-682ff73202dc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.756434 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkbds\" (UniqueName: \"kubernetes.io/projected/ed55f221-f5eb-421e-88b3-682ff73202dc-kube-api-access-bkbds\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.756474 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.756488 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed55f221-f5eb-421e-88b3-682ff73202dc-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:47 crc kubenswrapper[4697]: I0127 15:43:47.922965 4697 scope.go:117] "RemoveContainer" containerID="6c713ae0a01cf18d1cffdf269e6e59001a1834caed56b26e6be53ed3fb283d13" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.141597 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" event={"ID":"ed55f221-f5eb-421e-88b3-682ff73202dc","Type":"ContainerDied","Data":"2c7c2a7897892850f2ea53b96f522f75caebc3481ba7a042514aab07018048d9"} Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.141947 4697 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c7c2a7897892850f2ea53b96f522f75caebc3481ba7a042514aab07018048d9" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.141918 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.233510 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx"] Jan 27 15:43:48 crc kubenswrapper[4697]: E0127 15:43:48.234490 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed55f221-f5eb-421e-88b3-682ff73202dc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.234589 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed55f221-f5eb-421e-88b3-682ff73202dc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.234906 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed55f221-f5eb-421e-88b3-682ff73202dc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.236288 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.239202 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.239394 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.240115 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.244850 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.245240 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx"] Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.266046 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h6skx\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.266234 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h6skx\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: 
I0127 15:43:48.266283 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6jvv\" (UniqueName: \"kubernetes.io/projected/dba3e49a-c1cb-4006-b821-a341645c7fba-kube-api-access-g6jvv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h6skx\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.333308 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.333353 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.368344 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h6skx\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.368527 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h6skx\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.368567 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6jvv\" (UniqueName: \"kubernetes.io/projected/dba3e49a-c1cb-4006-b821-a341645c7fba-kube-api-access-g6jvv\") 
pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h6skx\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.377149 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h6skx\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.377869 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h6skx\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.387767 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6jvv\" (UniqueName: \"kubernetes.io/projected/dba3e49a-c1cb-4006-b821-a341645c7fba-kube-api-access-g6jvv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h6skx\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:48 crc kubenswrapper[4697]: I0127 15:43:48.551673 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:49 crc kubenswrapper[4697]: W0127 15:43:49.094128 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddba3e49a_c1cb_4006_b821_a341645c7fba.slice/crio-a2e566bf3bc51cd8c8d4eeb4f94da67e978ed961b09c9f4fcce7bdbe6feb9fd6 WatchSource:0}: Error finding container a2e566bf3bc51cd8c8d4eeb4f94da67e978ed961b09c9f4fcce7bdbe6feb9fd6: Status 404 returned error can't find the container with id a2e566bf3bc51cd8c8d4eeb4f94da67e978ed961b09c9f4fcce7bdbe6feb9fd6 Jan 27 15:43:49 crc kubenswrapper[4697]: I0127 15:43:49.100580 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx"] Jan 27 15:43:49 crc kubenswrapper[4697]: I0127 15:43:49.151112 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" event={"ID":"dba3e49a-c1cb-4006-b821-a341645c7fba","Type":"ContainerStarted","Data":"a2e566bf3bc51cd8c8d4eeb4f94da67e978ed961b09c9f4fcce7bdbe6feb9fd6"} Jan 27 15:43:49 crc kubenswrapper[4697]: I0127 15:43:49.387426 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-49vrp" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerName="registry-server" probeResult="failure" output=< Jan 27 15:43:49 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:43:49 crc kubenswrapper[4697]: > Jan 27 15:43:50 crc kubenswrapper[4697]: I0127 15:43:50.164552 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" event={"ID":"dba3e49a-c1cb-4006-b821-a341645c7fba","Type":"ContainerStarted","Data":"ed318e786909e2bd3ac9f681bdc35ed6558d5fcf7bbf08e41468d93fdd50cc2d"} Jan 27 15:43:55 crc kubenswrapper[4697]: I0127 
15:43:55.220969 4697 generic.go:334] "Generic (PLEG): container finished" podID="dba3e49a-c1cb-4006-b821-a341645c7fba" containerID="ed318e786909e2bd3ac9f681bdc35ed6558d5fcf7bbf08e41468d93fdd50cc2d" exitCode=0 Jan 27 15:43:55 crc kubenswrapper[4697]: I0127 15:43:55.221038 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" event={"ID":"dba3e49a-c1cb-4006-b821-a341645c7fba","Type":"ContainerDied","Data":"ed318e786909e2bd3ac9f681bdc35ed6558d5fcf7bbf08e41468d93fdd50cc2d"} Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.707904 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.742619 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-ssh-key-openstack-edpm-ipam\") pod \"dba3e49a-c1cb-4006-b821-a341645c7fba\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.743119 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6jvv\" (UniqueName: \"kubernetes.io/projected/dba3e49a-c1cb-4006-b821-a341645c7fba-kube-api-access-g6jvv\") pod \"dba3e49a-c1cb-4006-b821-a341645c7fba\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.744472 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-inventory\") pod \"dba3e49a-c1cb-4006-b821-a341645c7fba\" (UID: \"dba3e49a-c1cb-4006-b821-a341645c7fba\") " Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.756230 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dba3e49a-c1cb-4006-b821-a341645c7fba-kube-api-access-g6jvv" (OuterVolumeSpecName: "kube-api-access-g6jvv") pod "dba3e49a-c1cb-4006-b821-a341645c7fba" (UID: "dba3e49a-c1cb-4006-b821-a341645c7fba"). InnerVolumeSpecName "kube-api-access-g6jvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.827073 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-inventory" (OuterVolumeSpecName: "inventory") pod "dba3e49a-c1cb-4006-b821-a341645c7fba" (UID: "dba3e49a-c1cb-4006-b821-a341645c7fba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.847391 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6jvv\" (UniqueName: \"kubernetes.io/projected/dba3e49a-c1cb-4006-b821-a341645c7fba-kube-api-access-g6jvv\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.847617 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.856659 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5kk7n"] Jan 27 15:43:56 crc kubenswrapper[4697]: E0127 15:43:56.857107 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba3e49a-c1cb-4006-b821-a341645c7fba" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.857125 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba3e49a-c1cb-4006-b821-a341645c7fba" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.857308 4697 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dba3e49a-c1cb-4006-b821-a341645c7fba" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.858625 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.870462 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kk7n"] Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.874973 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dba3e49a-c1cb-4006-b821-a341645c7fba" (UID: "dba3e49a-c1cb-4006-b821-a341645c7fba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.949507 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-catalog-content\") pod \"certified-operators-5kk7n\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.949600 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-utilities\") pod \"certified-operators-5kk7n\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.950008 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ctsm9\" (UniqueName: \"kubernetes.io/projected/a3f0014d-f3e4-4924-8080-271e796f0f7a-kube-api-access-ctsm9\") pod \"certified-operators-5kk7n\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:56 crc kubenswrapper[4697]: I0127 15:43:56.950439 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dba3e49a-c1cb-4006-b821-a341645c7fba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.051748 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctsm9\" (UniqueName: \"kubernetes.io/projected/a3f0014d-f3e4-4924-8080-271e796f0f7a-kube-api-access-ctsm9\") pod \"certified-operators-5kk7n\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.052249 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-catalog-content\") pod \"certified-operators-5kk7n\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.052364 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-utilities\") pod \"certified-operators-5kk7n\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.052973 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-catalog-content\") 
pod \"certified-operators-5kk7n\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.053042 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-utilities\") pod \"certified-operators-5kk7n\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.072290 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctsm9\" (UniqueName: \"kubernetes.io/projected/a3f0014d-f3e4-4924-8080-271e796f0f7a-kube-api-access-ctsm9\") pod \"certified-operators-5kk7n\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.216483 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.242237 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" event={"ID":"dba3e49a-c1cb-4006-b821-a341645c7fba","Type":"ContainerDied","Data":"a2e566bf3bc51cd8c8d4eeb4f94da67e978ed961b09c9f4fcce7bdbe6feb9fd6"} Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.242267 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2e566bf3bc51cd8c8d4eeb4f94da67e978ed961b09c9f4fcce7bdbe6feb9fd6" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.242328 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h6skx" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.361845 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd"] Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.374681 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.382256 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.382556 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.383127 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.383273 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.425765 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd"] Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.463878 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48vq\" (UniqueName: \"kubernetes.io/projected/91be9d7e-7513-4b5f-a897-9bb94f9d7649-kube-api-access-l48vq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vfmvd\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.464228 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vfmvd\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.464300 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vfmvd\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.565522 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48vq\" (UniqueName: \"kubernetes.io/projected/91be9d7e-7513-4b5f-a897-9bb94f9d7649-kube-api-access-l48vq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vfmvd\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.565580 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vfmvd\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.565635 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-vfmvd\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.571776 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vfmvd\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.572358 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vfmvd\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.597738 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l48vq\" (UniqueName: \"kubernetes.io/projected/91be9d7e-7513-4b5f-a897-9bb94f9d7649-kube-api-access-l48vq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vfmvd\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.731898 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:43:57 crc kubenswrapper[4697]: I0127 15:43:57.822436 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kk7n"] Jan 27 15:43:58 crc kubenswrapper[4697]: I0127 15:43:58.254260 4697 generic.go:334] "Generic (PLEG): container finished" podID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerID="d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0" exitCode=0 Jan 27 15:43:58 crc kubenswrapper[4697]: I0127 15:43:58.254401 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kk7n" event={"ID":"a3f0014d-f3e4-4924-8080-271e796f0f7a","Type":"ContainerDied","Data":"d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0"} Jan 27 15:43:58 crc kubenswrapper[4697]: I0127 15:43:58.254541 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kk7n" event={"ID":"a3f0014d-f3e4-4924-8080-271e796f0f7a","Type":"ContainerStarted","Data":"b863c95cc688239cc2d5c19b0414656bdd35ca9f567d0f6069edc7f210175655"} Jan 27 15:43:58 crc kubenswrapper[4697]: I0127 15:43:58.389059 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:58 crc kubenswrapper[4697]: I0127 15:43:58.435444 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd"] Jan 27 15:43:58 crc kubenswrapper[4697]: W0127 15:43:58.438016 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91be9d7e_7513_4b5f_a897_9bb94f9d7649.slice/crio-e5aeef4514ced00595e5904d96dd5c44e0e0c59fa764fba6f22c012c3558fae3 WatchSource:0}: Error finding container e5aeef4514ced00595e5904d96dd5c44e0e0c59fa764fba6f22c012c3558fae3: Status 404 returned error can't 
find the container with id e5aeef4514ced00595e5904d96dd5c44e0e0c59fa764fba6f22c012c3558fae3 Jan 27 15:43:58 crc kubenswrapper[4697]: I0127 15:43:58.445714 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:43:59 crc kubenswrapper[4697]: I0127 15:43:59.271873 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" event={"ID":"91be9d7e-7513-4b5f-a897-9bb94f9d7649","Type":"ContainerStarted","Data":"d753cecfc25764d8f96b8202b87d99b10b796d223a8f2db5916c6c60b51cf1f0"} Jan 27 15:43:59 crc kubenswrapper[4697]: I0127 15:43:59.272372 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" event={"ID":"91be9d7e-7513-4b5f-a897-9bb94f9d7649","Type":"ContainerStarted","Data":"e5aeef4514ced00595e5904d96dd5c44e0e0c59fa764fba6f22c012c3558fae3"} Jan 27 15:43:59 crc kubenswrapper[4697]: I0127 15:43:59.286647 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" podStartSLOduration=1.758990603 podStartE2EDuration="2.286630197s" podCreationTimestamp="2026-01-27 15:43:57 +0000 UTC" firstStartedPulling="2026-01-27 15:43:58.440957968 +0000 UTC m=+2134.613357749" lastFinishedPulling="2026-01-27 15:43:58.968597562 +0000 UTC m=+2135.140997343" observedRunningTime="2026-01-27 15:43:59.286458543 +0000 UTC m=+2135.458858324" watchObservedRunningTime="2026-01-27 15:43:59.286630197 +0000 UTC m=+2135.459029968" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.162954 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fpqbx"] Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.165132 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.205500 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpqbx"] Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.221964 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-utilities\") pod \"redhat-marketplace-fpqbx\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.222266 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-catalog-content\") pod \"redhat-marketplace-fpqbx\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.222389 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmg5j\" (UniqueName: \"kubernetes.io/projected/45e572bd-87df-4491-a73c-c8b727097848-kube-api-access-dmg5j\") pod \"redhat-marketplace-fpqbx\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.283984 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kk7n" event={"ID":"a3f0014d-f3e4-4924-8080-271e796f0f7a","Type":"ContainerStarted","Data":"a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0"} Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.324000 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-utilities\") pod \"redhat-marketplace-fpqbx\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.324070 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-catalog-content\") pod \"redhat-marketplace-fpqbx\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.324123 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmg5j\" (UniqueName: \"kubernetes.io/projected/45e572bd-87df-4491-a73c-c8b727097848-kube-api-access-dmg5j\") pod \"redhat-marketplace-fpqbx\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.324688 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-catalog-content\") pod \"redhat-marketplace-fpqbx\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.326137 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-utilities\") pod \"redhat-marketplace-fpqbx\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.341986 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmg5j\" (UniqueName: 
\"kubernetes.io/projected/45e572bd-87df-4491-a73c-c8b727097848-kube-api-access-dmg5j\") pod \"redhat-marketplace-fpqbx\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:00 crc kubenswrapper[4697]: I0127 15:44:00.502223 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:01 crc kubenswrapper[4697]: I0127 15:44:01.079627 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpqbx"] Jan 27 15:44:01 crc kubenswrapper[4697]: W0127 15:44:01.092516 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45e572bd_87df_4491_a73c_c8b727097848.slice/crio-60b6d391dc3f7281d9610456e7b1f4ba028946253914920641f9a1d507436d17 WatchSource:0}: Error finding container 60b6d391dc3f7281d9610456e7b1f4ba028946253914920641f9a1d507436d17: Status 404 returned error can't find the container with id 60b6d391dc3f7281d9610456e7b1f4ba028946253914920641f9a1d507436d17 Jan 27 15:44:01 crc kubenswrapper[4697]: I0127 15:44:01.296638 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpqbx" event={"ID":"45e572bd-87df-4491-a73c-c8b727097848","Type":"ContainerStarted","Data":"60b6d391dc3f7281d9610456e7b1f4ba028946253914920641f9a1d507436d17"} Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.307158 4697 generic.go:334] "Generic (PLEG): container finished" podID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerID="a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0" exitCode=0 Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.307228 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kk7n" 
event={"ID":"a3f0014d-f3e4-4924-8080-271e796f0f7a","Type":"ContainerDied","Data":"a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0"} Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.310878 4697 generic.go:334] "Generic (PLEG): container finished" podID="45e572bd-87df-4491-a73c-c8b727097848" containerID="f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5" exitCode=0 Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.310910 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpqbx" event={"ID":"45e572bd-87df-4491-a73c-c8b727097848","Type":"ContainerDied","Data":"f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5"} Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.349620 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49vrp"] Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.349997 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-49vrp" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerName="registry-server" containerID="cri-o://424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86" gracePeriod=2 Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.862762 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.986446 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxq9v\" (UniqueName: \"kubernetes.io/projected/ce2d0c40-6904-4472-8711-b8e3bdaf1876-kube-api-access-cxq9v\") pod \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.986530 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-catalog-content\") pod \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.986638 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-utilities\") pod \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\" (UID: \"ce2d0c40-6904-4472-8711-b8e3bdaf1876\") " Jan 27 15:44:02 crc kubenswrapper[4697]: I0127 15:44:02.987985 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-utilities" (OuterVolumeSpecName: "utilities") pod "ce2d0c40-6904-4472-8711-b8e3bdaf1876" (UID: "ce2d0c40-6904-4472-8711-b8e3bdaf1876"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.003146 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2d0c40-6904-4472-8711-b8e3bdaf1876-kube-api-access-cxq9v" (OuterVolumeSpecName: "kube-api-access-cxq9v") pod "ce2d0c40-6904-4472-8711-b8e3bdaf1876" (UID: "ce2d0c40-6904-4472-8711-b8e3bdaf1876"). InnerVolumeSpecName "kube-api-access-cxq9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.039732 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce2d0c40-6904-4472-8711-b8e3bdaf1876" (UID: "ce2d0c40-6904-4472-8711-b8e3bdaf1876"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.089712 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxq9v\" (UniqueName: \"kubernetes.io/projected/ce2d0c40-6904-4472-8711-b8e3bdaf1876-kube-api-access-cxq9v\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.089749 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.089761 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2d0c40-6904-4472-8711-b8e3bdaf1876-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.322423 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kk7n" event={"ID":"a3f0014d-f3e4-4924-8080-271e796f0f7a","Type":"ContainerStarted","Data":"03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb"} Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.326162 4697 generic.go:334] "Generic (PLEG): container finished" podID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerID="424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86" exitCode=0 Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.326769 4697 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-49vrp" event={"ID":"ce2d0c40-6904-4472-8711-b8e3bdaf1876","Type":"ContainerDied","Data":"424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86"} Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.326750 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49vrp" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.326865 4697 scope.go:117] "RemoveContainer" containerID="424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.326843 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49vrp" event={"ID":"ce2d0c40-6904-4472-8711-b8e3bdaf1876","Type":"ContainerDied","Data":"c6c79ea4a7d21e6b3ad34cdc8272963447168d7035d001192c6dd0ead212aae7"} Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.382269 4697 scope.go:117] "RemoveContainer" containerID="4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.387017 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5kk7n" podStartSLOduration=2.932315962 podStartE2EDuration="7.386998665s" podCreationTimestamp="2026-01-27 15:43:56 +0000 UTC" firstStartedPulling="2026-01-27 15:43:58.257325057 +0000 UTC m=+2134.429724838" lastFinishedPulling="2026-01-27 15:44:02.71200776 +0000 UTC m=+2138.884407541" observedRunningTime="2026-01-27 15:44:03.352707484 +0000 UTC m=+2139.525107265" watchObservedRunningTime="2026-01-27 15:44:03.386998665 +0000 UTC m=+2139.559398446" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.387507 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49vrp"] Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.395134 4697 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-49vrp"] Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.412171 4697 scope.go:117] "RemoveContainer" containerID="a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.432576 4697 scope.go:117] "RemoveContainer" containerID="424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86" Jan 27 15:44:03 crc kubenswrapper[4697]: E0127 15:44:03.433541 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86\": container with ID starting with 424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86 not found: ID does not exist" containerID="424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.433601 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86"} err="failed to get container status \"424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86\": rpc error: code = NotFound desc = could not find container \"424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86\": container with ID starting with 424585dd5c44c99c4305cc8874fdfec50073fd6915b0c98d007462edbbc08c86 not found: ID does not exist" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.433638 4697 scope.go:117] "RemoveContainer" containerID="4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd" Jan 27 15:44:03 crc kubenswrapper[4697]: E0127 15:44:03.434049 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd\": container with ID starting with 
4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd not found: ID does not exist" containerID="4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.434076 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd"} err="failed to get container status \"4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd\": rpc error: code = NotFound desc = could not find container \"4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd\": container with ID starting with 4f0d7b97a43b3c645b0673dfb69652fefddb4c9a299acae63535bc76b8af9fbd not found: ID does not exist" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.434096 4697 scope.go:117] "RemoveContainer" containerID="a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821" Jan 27 15:44:03 crc kubenswrapper[4697]: E0127 15:44:03.436143 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821\": container with ID starting with a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821 not found: ID does not exist" containerID="a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821" Jan 27 15:44:03 crc kubenswrapper[4697]: I0127 15:44:03.436166 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821"} err="failed to get container status \"a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821\": rpc error: code = NotFound desc = could not find container \"a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821\": container with ID starting with a784e9cb72bc4155ae494b5ff3e63c1471a7b9a3f6316ff77a70c044821d5821 not found: ID does not 
exist" Jan 27 15:44:04 crc kubenswrapper[4697]: I0127 15:44:04.340422 4697 generic.go:334] "Generic (PLEG): container finished" podID="45e572bd-87df-4491-a73c-c8b727097848" containerID="6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8" exitCode=0 Jan 27 15:44:04 crc kubenswrapper[4697]: I0127 15:44:04.340476 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpqbx" event={"ID":"45e572bd-87df-4491-a73c-c8b727097848","Type":"ContainerDied","Data":"6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8"} Jan 27 15:44:04 crc kubenswrapper[4697]: I0127 15:44:04.580024 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" path="/var/lib/kubelet/pods/ce2d0c40-6904-4472-8711-b8e3bdaf1876/volumes" Jan 27 15:44:06 crc kubenswrapper[4697]: I0127 15:44:06.361025 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpqbx" event={"ID":"45e572bd-87df-4491-a73c-c8b727097848","Type":"ContainerStarted","Data":"7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471"} Jan 27 15:44:06 crc kubenswrapper[4697]: I0127 15:44:06.383705 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fpqbx" podStartSLOduration=3.117893408 podStartE2EDuration="6.383684049s" podCreationTimestamp="2026-01-27 15:44:00 +0000 UTC" firstStartedPulling="2026-01-27 15:44:02.312085567 +0000 UTC m=+2138.484485348" lastFinishedPulling="2026-01-27 15:44:05.577876208 +0000 UTC m=+2141.750275989" observedRunningTime="2026-01-27 15:44:06.378888252 +0000 UTC m=+2142.551288033" watchObservedRunningTime="2026-01-27 15:44:06.383684049 +0000 UTC m=+2142.556083830" Jan 27 15:44:07 crc kubenswrapper[4697]: I0127 15:44:07.217460 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:44:07 crc 
kubenswrapper[4697]: I0127 15:44:07.217930 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:44:08 crc kubenswrapper[4697]: I0127 15:44:08.282652 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5kk7n" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerName="registry-server" probeResult="failure" output=< Jan 27 15:44:08 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:44:08 crc kubenswrapper[4697]: > Jan 27 15:44:10 crc kubenswrapper[4697]: I0127 15:44:10.503303 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:10 crc kubenswrapper[4697]: I0127 15:44:10.504400 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:10 crc kubenswrapper[4697]: I0127 15:44:10.551247 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:11 crc kubenswrapper[4697]: I0127 15:44:11.459594 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:12 crc kubenswrapper[4697]: I0127 15:44:12.545337 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpqbx"] Jan 27 15:44:14 crc kubenswrapper[4697]: I0127 15:44:14.431026 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fpqbx" podUID="45e572bd-87df-4491-a73c-c8b727097848" containerName="registry-server" containerID="cri-o://7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471" gracePeriod=2 Jan 27 15:44:14 crc kubenswrapper[4697]: I0127 15:44:14.873759 4697 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:14 crc kubenswrapper[4697]: I0127 15:44:14.925283 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-utilities\") pod \"45e572bd-87df-4491-a73c-c8b727097848\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " Jan 27 15:44:14 crc kubenswrapper[4697]: I0127 15:44:14.925643 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmg5j\" (UniqueName: \"kubernetes.io/projected/45e572bd-87df-4491-a73c-c8b727097848-kube-api-access-dmg5j\") pod \"45e572bd-87df-4491-a73c-c8b727097848\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " Jan 27 15:44:14 crc kubenswrapper[4697]: I0127 15:44:14.925766 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-catalog-content\") pod \"45e572bd-87df-4491-a73c-c8b727097848\" (UID: \"45e572bd-87df-4491-a73c-c8b727097848\") " Jan 27 15:44:14 crc kubenswrapper[4697]: I0127 15:44:14.927844 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-utilities" (OuterVolumeSpecName: "utilities") pod "45e572bd-87df-4491-a73c-c8b727097848" (UID: "45e572bd-87df-4491-a73c-c8b727097848"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:44:14 crc kubenswrapper[4697]: I0127 15:44:14.944091 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e572bd-87df-4491-a73c-c8b727097848-kube-api-access-dmg5j" (OuterVolumeSpecName: "kube-api-access-dmg5j") pod "45e572bd-87df-4491-a73c-c8b727097848" (UID: "45e572bd-87df-4491-a73c-c8b727097848"). InnerVolumeSpecName "kube-api-access-dmg5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:44:14 crc kubenswrapper[4697]: I0127 15:44:14.947491 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45e572bd-87df-4491-a73c-c8b727097848" (UID: "45e572bd-87df-4491-a73c-c8b727097848"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.027984 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.028025 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmg5j\" (UniqueName: \"kubernetes.io/projected/45e572bd-87df-4491-a73c-c8b727097848-kube-api-access-dmg5j\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.028037 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e572bd-87df-4491-a73c-c8b727097848-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.440727 4697 generic.go:334] "Generic (PLEG): container finished" podID="45e572bd-87df-4491-a73c-c8b727097848" containerID="7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471" exitCode=0 Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.440767 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpqbx" event={"ID":"45e572bd-87df-4491-a73c-c8b727097848","Type":"ContainerDied","Data":"7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471"} Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.440814 4697 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-fpqbx" event={"ID":"45e572bd-87df-4491-a73c-c8b727097848","Type":"ContainerDied","Data":"60b6d391dc3f7281d9610456e7b1f4ba028946253914920641f9a1d507436d17"} Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.440842 4697 scope.go:117] "RemoveContainer" containerID="7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.440992 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpqbx" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.484523 4697 scope.go:117] "RemoveContainer" containerID="6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.503911 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpqbx"] Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.508037 4697 scope.go:117] "RemoveContainer" containerID="f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.516924 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpqbx"] Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.560130 4697 scope.go:117] "RemoveContainer" containerID="7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471" Jan 27 15:44:15 crc kubenswrapper[4697]: E0127 15:44:15.560401 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471\": container with ID starting with 7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471 not found: ID does not exist" containerID="7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.560429 4697 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471"} err="failed to get container status \"7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471\": rpc error: code = NotFound desc = could not find container \"7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471\": container with ID starting with 7155f8ce6aac26241c4b2cd2dd50dca2bc4c85e172ddc9c8b8e453297d8f4471 not found: ID does not exist" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.560448 4697 scope.go:117] "RemoveContainer" containerID="6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8" Jan 27 15:44:15 crc kubenswrapper[4697]: E0127 15:44:15.560691 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8\": container with ID starting with 6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8 not found: ID does not exist" containerID="6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.560721 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8"} err="failed to get container status \"6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8\": rpc error: code = NotFound desc = could not find container \"6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8\": container with ID starting with 6065760b3803a6f42fd7d5a1a2391de9cf8c9c2b5a9094e700c2d9a1e793a5a8 not found: ID does not exist" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.560739 4697 scope.go:117] "RemoveContainer" containerID="f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5" Jan 27 15:44:15 crc kubenswrapper[4697]: E0127 
15:44:15.561331 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5\": container with ID starting with f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5 not found: ID does not exist" containerID="f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5" Jan 27 15:44:15 crc kubenswrapper[4697]: I0127 15:44:15.561366 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5"} err="failed to get container status \"f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5\": rpc error: code = NotFound desc = could not find container \"f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5\": container with ID starting with f2ecd2642c092cec63df521e3d2fa3cd05e49b95eaec6e39238651efba87eee5 not found: ID does not exist" Jan 27 15:44:16 crc kubenswrapper[4697]: I0127 15:44:16.578881 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e572bd-87df-4491-a73c-c8b727097848" path="/var/lib/kubelet/pods/45e572bd-87df-4491-a73c-c8b727097848/volumes" Jan 27 15:44:17 crc kubenswrapper[4697]: I0127 15:44:17.274313 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:44:17 crc kubenswrapper[4697]: I0127 15:44:17.339630 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:44:17 crc kubenswrapper[4697]: I0127 15:44:17.949152 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kk7n"] Jan 27 15:44:18 crc kubenswrapper[4697]: I0127 15:44:18.479499 4697 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-5kk7n" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerName="registry-server" containerID="cri-o://03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb" gracePeriod=2 Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.301150 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.412550 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-catalog-content\") pod \"a3f0014d-f3e4-4924-8080-271e796f0f7a\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.412882 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctsm9\" (UniqueName: \"kubernetes.io/projected/a3f0014d-f3e4-4924-8080-271e796f0f7a-kube-api-access-ctsm9\") pod \"a3f0014d-f3e4-4924-8080-271e796f0f7a\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.413672 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-utilities\") pod \"a3f0014d-f3e4-4924-8080-271e796f0f7a\" (UID: \"a3f0014d-f3e4-4924-8080-271e796f0f7a\") " Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.414193 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-utilities" (OuterVolumeSpecName: "utilities") pod "a3f0014d-f3e4-4924-8080-271e796f0f7a" (UID: "a3f0014d-f3e4-4924-8080-271e796f0f7a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.414391 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.419567 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f0014d-f3e4-4924-8080-271e796f0f7a-kube-api-access-ctsm9" (OuterVolumeSpecName: "kube-api-access-ctsm9") pod "a3f0014d-f3e4-4924-8080-271e796f0f7a" (UID: "a3f0014d-f3e4-4924-8080-271e796f0f7a"). InnerVolumeSpecName "kube-api-access-ctsm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.464181 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3f0014d-f3e4-4924-8080-271e796f0f7a" (UID: "a3f0014d-f3e4-4924-8080-271e796f0f7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.489394 4697 generic.go:334] "Generic (PLEG): container finished" podID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerID="03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb" exitCode=0 Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.489438 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kk7n" event={"ID":"a3f0014d-f3e4-4924-8080-271e796f0f7a","Type":"ContainerDied","Data":"03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb"} Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.489467 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kk7n" event={"ID":"a3f0014d-f3e4-4924-8080-271e796f0f7a","Type":"ContainerDied","Data":"b863c95cc688239cc2d5c19b0414656bdd35ca9f567d0f6069edc7f210175655"} Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.489484 4697 scope.go:117] "RemoveContainer" containerID="03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.489638 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5kk7n" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.517008 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3f0014d-f3e4-4924-8080-271e796f0f7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.517209 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctsm9\" (UniqueName: \"kubernetes.io/projected/a3f0014d-f3e4-4924-8080-271e796f0f7a-kube-api-access-ctsm9\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.521075 4697 scope.go:117] "RemoveContainer" containerID="a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.522858 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kk7n"] Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.530153 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5kk7n"] Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.537816 4697 scope.go:117] "RemoveContainer" containerID="d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.577584 4697 scope.go:117] "RemoveContainer" containerID="03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb" Jan 27 15:44:19 crc kubenswrapper[4697]: E0127 15:44:19.578761 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb\": container with ID starting with 03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb not found: ID does not exist" containerID="03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb" Jan 27 15:44:19 crc 
kubenswrapper[4697]: I0127 15:44:19.578908 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb"} err="failed to get container status \"03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb\": rpc error: code = NotFound desc = could not find container \"03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb\": container with ID starting with 03482bfa19c68f8c2ab1b4ab1ffbaa9d316f9eba66737f39f078dd69e3a5a9cb not found: ID does not exist" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.578987 4697 scope.go:117] "RemoveContainer" containerID="a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0" Jan 27 15:44:19 crc kubenswrapper[4697]: E0127 15:44:19.579296 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0\": container with ID starting with a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0 not found: ID does not exist" containerID="a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.579386 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0"} err="failed to get container status \"a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0\": rpc error: code = NotFound desc = could not find container \"a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0\": container with ID starting with a3951fcfb961120d2a6561cae35555187ca58643236289db0bf361bb956ae5d0 not found: ID does not exist" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.579476 4697 scope.go:117] "RemoveContainer" containerID="d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0" Jan 27 
15:44:19 crc kubenswrapper[4697]: E0127 15:44:19.579812 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0\": container with ID starting with d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0 not found: ID does not exist" containerID="d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0" Jan 27 15:44:19 crc kubenswrapper[4697]: I0127 15:44:19.579902 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0"} err="failed to get container status \"d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0\": rpc error: code = NotFound desc = could not find container \"d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0\": container with ID starting with d332be6ffbe2829565a444e5480672ee6773164ab06ba6dae01fec5e63602dc0 not found: ID does not exist" Jan 27 15:44:20 crc kubenswrapper[4697]: I0127 15:44:20.579533 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" path="/var/lib/kubelet/pods/a3f0014d-f3e4-4924-8080-271e796f0f7a/volumes" Jan 27 15:44:25 crc kubenswrapper[4697]: I0127 15:44:25.108437 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:44:25 crc kubenswrapper[4697]: I0127 15:44:25.108980 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:44:40 crc kubenswrapper[4697]: I0127 15:44:40.660041 4697 generic.go:334] "Generic (PLEG): container finished" podID="91be9d7e-7513-4b5f-a897-9bb94f9d7649" containerID="d753cecfc25764d8f96b8202b87d99b10b796d223a8f2db5916c6c60b51cf1f0" exitCode=0 Jan 27 15:44:40 crc kubenswrapper[4697]: I0127 15:44:40.660136 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" event={"ID":"91be9d7e-7513-4b5f-a897-9bb94f9d7649","Type":"ContainerDied","Data":"d753cecfc25764d8f96b8202b87d99b10b796d223a8f2db5916c6c60b51cf1f0"} Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.160693 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.254709 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-ssh-key-openstack-edpm-ipam\") pod \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.254848 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-inventory\") pod \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.254895 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l48vq\" (UniqueName: \"kubernetes.io/projected/91be9d7e-7513-4b5f-a897-9bb94f9d7649-kube-api-access-l48vq\") pod \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\" (UID: \"91be9d7e-7513-4b5f-a897-9bb94f9d7649\") " Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 
15:44:42.262394 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91be9d7e-7513-4b5f-a897-9bb94f9d7649-kube-api-access-l48vq" (OuterVolumeSpecName: "kube-api-access-l48vq") pod "91be9d7e-7513-4b5f-a897-9bb94f9d7649" (UID: "91be9d7e-7513-4b5f-a897-9bb94f9d7649"). InnerVolumeSpecName "kube-api-access-l48vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.286771 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "91be9d7e-7513-4b5f-a897-9bb94f9d7649" (UID: "91be9d7e-7513-4b5f-a897-9bb94f9d7649"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.293372 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-inventory" (OuterVolumeSpecName: "inventory") pod "91be9d7e-7513-4b5f-a897-9bb94f9d7649" (UID: "91be9d7e-7513-4b5f-a897-9bb94f9d7649"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.357753 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.357811 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91be9d7e-7513-4b5f-a897-9bb94f9d7649-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.357821 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l48vq\" (UniqueName: \"kubernetes.io/projected/91be9d7e-7513-4b5f-a897-9bb94f9d7649-kube-api-access-l48vq\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.676288 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" event={"ID":"91be9d7e-7513-4b5f-a897-9bb94f9d7649","Type":"ContainerDied","Data":"e5aeef4514ced00595e5904d96dd5c44e0e0c59fa764fba6f22c012c3558fae3"} Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.676598 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5aeef4514ced00595e5904d96dd5c44e0e0c59fa764fba6f22c012c3558fae3" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.676671 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vfmvd" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811212 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2"] Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811621 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerName="extract-utilities" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811649 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerName="extract-utilities" Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811667 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerName="registry-server" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811676 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerName="registry-server" Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811686 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerName="extract-content" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811694 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerName="extract-content" Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811707 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91be9d7e-7513-4b5f-a897-9bb94f9d7649" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811716 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="91be9d7e-7513-4b5f-a897-9bb94f9d7649" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811728 
4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerName="extract-content" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811735 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerName="extract-content" Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811749 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e572bd-87df-4491-a73c-c8b727097848" containerName="extract-utilities" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811758 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e572bd-87df-4491-a73c-c8b727097848" containerName="extract-utilities" Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811776 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e572bd-87df-4491-a73c-c8b727097848" containerName="extract-content" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811801 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e572bd-87df-4491-a73c-c8b727097848" containerName="extract-content" Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811822 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e572bd-87df-4491-a73c-c8b727097848" containerName="registry-server" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811831 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e572bd-87df-4491-a73c-c8b727097848" containerName="registry-server" Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811847 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerName="extract-utilities" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811854 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerName="extract-utilities" Jan 27 15:44:42 crc kubenswrapper[4697]: E0127 15:44:42.811873 4697 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerName="registry-server" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.811880 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerName="registry-server" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.812127 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e572bd-87df-4491-a73c-c8b727097848" containerName="registry-server" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.812146 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="91be9d7e-7513-4b5f-a897-9bb94f9d7649" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.812160 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2d0c40-6904-4472-8711-b8e3bdaf1876" containerName="registry-server" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.812182 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f0014d-f3e4-4924-8080-271e796f0f7a" containerName="registry-server" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.812863 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.815162 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.815457 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.815687 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.815860 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.827372 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2"] Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.867164 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-666b2\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.867219 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd992\" (UniqueName: \"kubernetes.io/projected/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-kube-api-access-gd992\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-666b2\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:42 crc 
kubenswrapper[4697]: I0127 15:44:42.867261 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-666b2\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.968621 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-666b2\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.968669 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd992\" (UniqueName: \"kubernetes.io/projected/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-kube-api-access-gd992\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-666b2\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.968714 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-666b2\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.973437 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-666b2\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.973451 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-666b2\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:42 crc kubenswrapper[4697]: I0127 15:44:42.993349 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd992\" (UniqueName: \"kubernetes.io/projected/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-kube-api-access-gd992\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-666b2\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:43 crc kubenswrapper[4697]: I0127 15:44:43.132567 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:44:43 crc kubenswrapper[4697]: I0127 15:44:43.661132 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2"] Jan 27 15:44:43 crc kubenswrapper[4697]: I0127 15:44:43.691187 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" event={"ID":"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3","Type":"ContainerStarted","Data":"b9ebd6218ac6958acc8914cf66b7c78fbad83da743b98f42595b645e90b74940"} Jan 27 15:44:44 crc kubenswrapper[4697]: I0127 15:44:44.700171 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" event={"ID":"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3","Type":"ContainerStarted","Data":"c218f85bf58b370981d34b3f9f9ac5725a420b22118fbb85968c614736916d6d"} Jan 27 15:44:44 crc kubenswrapper[4697]: I0127 15:44:44.723630 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" podStartSLOduration=2.3123494620000002 podStartE2EDuration="2.723608343s" podCreationTimestamp="2026-01-27 15:44:42 +0000 UTC" firstStartedPulling="2026-01-27 15:44:43.660121234 +0000 UTC m=+2179.832521015" lastFinishedPulling="2026-01-27 15:44:44.071380115 +0000 UTC m=+2180.243779896" observedRunningTime="2026-01-27 15:44:44.714881759 +0000 UTC m=+2180.887281540" watchObservedRunningTime="2026-01-27 15:44:44.723608343 +0000 UTC m=+2180.896008124" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.590444 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dq2df"] Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.593696 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.617294 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-catalog-content\") pod \"redhat-operators-dq2df\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.617356 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt7td\" (UniqueName: \"kubernetes.io/projected/9afae58e-9132-4f96-b373-82beb70e245b-kube-api-access-mt7td\") pod \"redhat-operators-dq2df\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.617462 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-utilities\") pod \"redhat-operators-dq2df\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.646520 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dq2df"] Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.719486 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-catalog-content\") pod \"redhat-operators-dq2df\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.719594 4697 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mt7td\" (UniqueName: \"kubernetes.io/projected/9afae58e-9132-4f96-b373-82beb70e245b-kube-api-access-mt7td\") pod \"redhat-operators-dq2df\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.719664 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-utilities\") pod \"redhat-operators-dq2df\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.720303 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-utilities\") pod \"redhat-operators-dq2df\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.720365 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-catalog-content\") pod \"redhat-operators-dq2df\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.739276 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt7td\" (UniqueName: \"kubernetes.io/projected/9afae58e-9132-4f96-b373-82beb70e245b-kube-api-access-mt7td\") pod \"redhat-operators-dq2df\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:50 crc kubenswrapper[4697]: I0127 15:44:50.949272 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:44:51 crc kubenswrapper[4697]: I0127 15:44:51.429615 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dq2df"] Jan 27 15:44:51 crc kubenswrapper[4697]: I0127 15:44:51.758824 4697 generic.go:334] "Generic (PLEG): container finished" podID="9afae58e-9132-4f96-b373-82beb70e245b" containerID="0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449" exitCode=0 Jan 27 15:44:51 crc kubenswrapper[4697]: I0127 15:44:51.758960 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq2df" event={"ID":"9afae58e-9132-4f96-b373-82beb70e245b","Type":"ContainerDied","Data":"0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449"} Jan 27 15:44:51 crc kubenswrapper[4697]: I0127 15:44:51.758993 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq2df" event={"ID":"9afae58e-9132-4f96-b373-82beb70e245b","Type":"ContainerStarted","Data":"d5d38034dbfb2eb311c374ca1a9b6a26f4da0fbe945e7fecd8ce0199bec4af9d"} Jan 27 15:44:53 crc kubenswrapper[4697]: I0127 15:44:53.776670 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq2df" event={"ID":"9afae58e-9132-4f96-b373-82beb70e245b","Type":"ContainerStarted","Data":"1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68"} Jan 27 15:44:55 crc kubenswrapper[4697]: I0127 15:44:55.108543 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:44:55 crc kubenswrapper[4697]: I0127 15:44:55.108878 4697 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.161999 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d"] Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.165339 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.171549 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.171816 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.183846 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d"] Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.301278 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsdcm\" (UniqueName: \"kubernetes.io/projected/5b0ad28c-9462-4d97-bcc9-da634e079fd2-kube-api-access-wsdcm\") pod \"collect-profiles-29492145-9dp4d\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.301629 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/5b0ad28c-9462-4d97-bcc9-da634e079fd2-config-volume\") pod \"collect-profiles-29492145-9dp4d\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.302029 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0ad28c-9462-4d97-bcc9-da634e079fd2-secret-volume\") pod \"collect-profiles-29492145-9dp4d\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.404563 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsdcm\" (UniqueName: \"kubernetes.io/projected/5b0ad28c-9462-4d97-bcc9-da634e079fd2-kube-api-access-wsdcm\") pod \"collect-profiles-29492145-9dp4d\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.404701 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b0ad28c-9462-4d97-bcc9-da634e079fd2-config-volume\") pod \"collect-profiles-29492145-9dp4d\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.404767 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0ad28c-9462-4d97-bcc9-da634e079fd2-secret-volume\") pod \"collect-profiles-29492145-9dp4d\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: 
I0127 15:45:00.406486 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b0ad28c-9462-4d97-bcc9-da634e079fd2-config-volume\") pod \"collect-profiles-29492145-9dp4d\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.413809 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0ad28c-9462-4d97-bcc9-da634e079fd2-secret-volume\") pod \"collect-profiles-29492145-9dp4d\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.422915 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsdcm\" (UniqueName: \"kubernetes.io/projected/5b0ad28c-9462-4d97-bcc9-da634e079fd2-kube-api-access-wsdcm\") pod \"collect-profiles-29492145-9dp4d\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.486426 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.847421 4697 generic.go:334] "Generic (PLEG): container finished" podID="9afae58e-9132-4f96-b373-82beb70e245b" containerID="1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68" exitCode=0 Jan 27 15:45:00 crc kubenswrapper[4697]: I0127 15:45:00.847507 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq2df" event={"ID":"9afae58e-9132-4f96-b373-82beb70e245b","Type":"ContainerDied","Data":"1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68"} Jan 27 15:45:01 crc kubenswrapper[4697]: I0127 15:45:01.553381 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d"] Jan 27 15:45:01 crc kubenswrapper[4697]: I0127 15:45:01.858710 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" event={"ID":"5b0ad28c-9462-4d97-bcc9-da634e079fd2","Type":"ContainerStarted","Data":"182fe67c72a5584c7524a69f4060233fbd552baf1612123e9feb64d6906c4cd5"} Jan 27 15:45:01 crc kubenswrapper[4697]: I0127 15:45:01.858976 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" event={"ID":"5b0ad28c-9462-4d97-bcc9-da634e079fd2","Type":"ContainerStarted","Data":"4accb869efd54931f58d966430fc5e6c9e54ad851a3c8be981b67d3f44c59944"} Jan 27 15:45:01 crc kubenswrapper[4697]: I0127 15:45:01.881708 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" podStartSLOduration=1.8816808090000001 podStartE2EDuration="1.881680809s" podCreationTimestamp="2026-01-27 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 15:45:01.875469976 +0000 UTC m=+2198.047869777" watchObservedRunningTime="2026-01-27 15:45:01.881680809 +0000 UTC m=+2198.054080590" Jan 27 15:45:02 crc kubenswrapper[4697]: I0127 15:45:02.871946 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq2df" event={"ID":"9afae58e-9132-4f96-b373-82beb70e245b","Type":"ContainerStarted","Data":"8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb"} Jan 27 15:45:02 crc kubenswrapper[4697]: I0127 15:45:02.874798 4697 generic.go:334] "Generic (PLEG): container finished" podID="5b0ad28c-9462-4d97-bcc9-da634e079fd2" containerID="182fe67c72a5584c7524a69f4060233fbd552baf1612123e9feb64d6906c4cd5" exitCode=0 Jan 27 15:45:02 crc kubenswrapper[4697]: I0127 15:45:02.874834 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" event={"ID":"5b0ad28c-9462-4d97-bcc9-da634e079fd2","Type":"ContainerDied","Data":"182fe67c72a5584c7524a69f4060233fbd552baf1612123e9feb64d6906c4cd5"} Jan 27 15:45:02 crc kubenswrapper[4697]: I0127 15:45:02.902390 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dq2df" podStartSLOduration=2.987690031 podStartE2EDuration="12.902370177s" podCreationTimestamp="2026-01-27 15:44:50 +0000 UTC" firstStartedPulling="2026-01-27 15:44:51.761664449 +0000 UTC m=+2187.934064230" lastFinishedPulling="2026-01-27 15:45:01.676344585 +0000 UTC m=+2197.848744376" observedRunningTime="2026-01-27 15:45:02.897431177 +0000 UTC m=+2199.069830958" watchObservedRunningTime="2026-01-27 15:45:02.902370177 +0000 UTC m=+2199.074769958" Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.322827 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.482300 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsdcm\" (UniqueName: \"kubernetes.io/projected/5b0ad28c-9462-4d97-bcc9-da634e079fd2-kube-api-access-wsdcm\") pod \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.482422 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b0ad28c-9462-4d97-bcc9-da634e079fd2-config-volume\") pod \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.482554 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0ad28c-9462-4d97-bcc9-da634e079fd2-secret-volume\") pod \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\" (UID: \"5b0ad28c-9462-4d97-bcc9-da634e079fd2\") " Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.484247 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b0ad28c-9462-4d97-bcc9-da634e079fd2-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b0ad28c-9462-4d97-bcc9-da634e079fd2" (UID: "5b0ad28c-9462-4d97-bcc9-da634e079fd2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.490025 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0ad28c-9462-4d97-bcc9-da634e079fd2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b0ad28c-9462-4d97-bcc9-da634e079fd2" (UID: "5b0ad28c-9462-4d97-bcc9-da634e079fd2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.491577 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0ad28c-9462-4d97-bcc9-da634e079fd2-kube-api-access-wsdcm" (OuterVolumeSpecName: "kube-api-access-wsdcm") pod "5b0ad28c-9462-4d97-bcc9-da634e079fd2" (UID: "5b0ad28c-9462-4d97-bcc9-da634e079fd2"). InnerVolumeSpecName "kube-api-access-wsdcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.585813 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsdcm\" (UniqueName: \"kubernetes.io/projected/5b0ad28c-9462-4d97-bcc9-da634e079fd2-kube-api-access-wsdcm\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.586238 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b0ad28c-9462-4d97-bcc9-da634e079fd2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.586266 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b0ad28c-9462-4d97-bcc9-da634e079fd2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.636695 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69"] Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.646694 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-sfd69"] Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.911856 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" 
event={"ID":"5b0ad28c-9462-4d97-bcc9-da634e079fd2","Type":"ContainerDied","Data":"4accb869efd54931f58d966430fc5e6c9e54ad851a3c8be981b67d3f44c59944"} Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.912183 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4accb869efd54931f58d966430fc5e6c9e54ad851a3c8be981b67d3f44c59944" Jan 27 15:45:04 crc kubenswrapper[4697]: I0127 15:45:04.911924 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d" Jan 27 15:45:06 crc kubenswrapper[4697]: I0127 15:45:06.582925 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f6280f-8dc0-42b8-be4c-cbbc6528bf58" path="/var/lib/kubelet/pods/79f6280f-8dc0-42b8-be4c-cbbc6528bf58/volumes" Jan 27 15:45:10 crc kubenswrapper[4697]: I0127 15:45:10.949373 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:45:10 crc kubenswrapper[4697]: I0127 15:45:10.950943 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:45:11 crc kubenswrapper[4697]: I0127 15:45:11.993482 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dq2df" podUID="9afae58e-9132-4f96-b373-82beb70e245b" containerName="registry-server" probeResult="failure" output=< Jan 27 15:45:11 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:45:11 crc kubenswrapper[4697]: > Jan 27 15:45:20 crc kubenswrapper[4697]: I0127 15:45:20.993159 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:45:21 crc kubenswrapper[4697]: I0127 15:45:21.050827 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:45:21 crc kubenswrapper[4697]: I0127 15:45:21.779863 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dq2df"] Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.051513 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dq2df" podUID="9afae58e-9132-4f96-b373-82beb70e245b" containerName="registry-server" containerID="cri-o://8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb" gracePeriod=2 Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.536429 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.628124 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-catalog-content\") pod \"9afae58e-9132-4f96-b373-82beb70e245b\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.628278 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-utilities\") pod \"9afae58e-9132-4f96-b373-82beb70e245b\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.628373 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt7td\" (UniqueName: \"kubernetes.io/projected/9afae58e-9132-4f96-b373-82beb70e245b-kube-api-access-mt7td\") pod \"9afae58e-9132-4f96-b373-82beb70e245b\" (UID: \"9afae58e-9132-4f96-b373-82beb70e245b\") " Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.628961 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-utilities" (OuterVolumeSpecName: "utilities") pod "9afae58e-9132-4f96-b373-82beb70e245b" (UID: "9afae58e-9132-4f96-b373-82beb70e245b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.636168 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9afae58e-9132-4f96-b373-82beb70e245b-kube-api-access-mt7td" (OuterVolumeSpecName: "kube-api-access-mt7td") pod "9afae58e-9132-4f96-b373-82beb70e245b" (UID: "9afae58e-9132-4f96-b373-82beb70e245b"). InnerVolumeSpecName "kube-api-access-mt7td". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.732217 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.732563 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt7td\" (UniqueName: \"kubernetes.io/projected/9afae58e-9132-4f96-b373-82beb70e245b-kube-api-access-mt7td\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.757222 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9afae58e-9132-4f96-b373-82beb70e245b" (UID: "9afae58e-9132-4f96-b373-82beb70e245b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:22 crc kubenswrapper[4697]: I0127 15:45:22.834828 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afae58e-9132-4f96-b373-82beb70e245b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.062066 4697 generic.go:334] "Generic (PLEG): container finished" podID="9afae58e-9132-4f96-b373-82beb70e245b" containerID="8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb" exitCode=0 Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.062119 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq2df" event={"ID":"9afae58e-9132-4f96-b373-82beb70e245b","Type":"ContainerDied","Data":"8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb"} Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.062124 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dq2df" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.062154 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq2df" event={"ID":"9afae58e-9132-4f96-b373-82beb70e245b","Type":"ContainerDied","Data":"d5d38034dbfb2eb311c374ca1a9b6a26f4da0fbe945e7fecd8ce0199bec4af9d"} Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.062182 4697 scope.go:117] "RemoveContainer" containerID="8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.088887 4697 scope.go:117] "RemoveContainer" containerID="1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.100270 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dq2df"] Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.112719 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dq2df"] Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.123840 4697 scope.go:117] "RemoveContainer" containerID="0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.156895 4697 scope.go:117] "RemoveContainer" containerID="8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb" Jan 27 15:45:23 crc kubenswrapper[4697]: E0127 15:45:23.157436 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb\": container with ID starting with 8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb not found: ID does not exist" containerID="8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.157478 4697 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb"} err="failed to get container status \"8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb\": rpc error: code = NotFound desc = could not find container \"8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb\": container with ID starting with 8c86ea2f6cf93a5a65c9af7441a8fc37ef06d7289c8242727880e4ff07551ddb not found: ID does not exist" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.157502 4697 scope.go:117] "RemoveContainer" containerID="1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68" Jan 27 15:45:23 crc kubenswrapper[4697]: E0127 15:45:23.157767 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68\": container with ID starting with 1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68 not found: ID does not exist" containerID="1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.157797 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68"} err="failed to get container status \"1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68\": rpc error: code = NotFound desc = could not find container \"1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68\": container with ID starting with 1aeb7b4e83640cfa1f83a4fcd36618d9158a9e7d8d57a43e0837da3ab6bd6e68 not found: ID does not exist" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.157812 4697 scope.go:117] "RemoveContainer" containerID="0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449" Jan 27 15:45:23 crc kubenswrapper[4697]: E0127 
15:45:23.158015 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449\": container with ID starting with 0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449 not found: ID does not exist" containerID="0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449" Jan 27 15:45:23 crc kubenswrapper[4697]: I0127 15:45:23.158044 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449"} err="failed to get container status \"0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449\": rpc error: code = NotFound desc = could not find container \"0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449\": container with ID starting with 0ae331cff0f08b802c018902d59a34392448d4374b5d499ef809bb548fa71449 not found: ID does not exist" Jan 27 15:45:24 crc kubenswrapper[4697]: I0127 15:45:24.579295 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9afae58e-9132-4f96-b373-82beb70e245b" path="/var/lib/kubelet/pods/9afae58e-9132-4f96-b373-82beb70e245b/volumes" Jan 27 15:45:25 crc kubenswrapper[4697]: I0127 15:45:25.108658 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:45:25 crc kubenswrapper[4697]: I0127 15:45:25.108991 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 15:45:25 crc kubenswrapper[4697]: I0127 15:45:25.109035 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:45:25 crc kubenswrapper[4697]: I0127 15:45:25.109720 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:45:25 crc kubenswrapper[4697]: I0127 15:45:25.109796 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" gracePeriod=600 Jan 27 15:45:25 crc kubenswrapper[4697]: E0127 15:45:25.235095 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:45:26 crc kubenswrapper[4697]: I0127 15:45:26.087016 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" exitCode=0 Jan 27 15:45:26 crc kubenswrapper[4697]: I0127 15:45:26.087083 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" 
event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6"} Jan 27 15:45:26 crc kubenswrapper[4697]: I0127 15:45:26.087401 4697 scope.go:117] "RemoveContainer" containerID="8a39377f66792076ded24d1dd2009ec1f66f27328a943fc9055b637c8a864fd4" Jan 27 15:45:26 crc kubenswrapper[4697]: I0127 15:45:26.088117 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:45:26 crc kubenswrapper[4697]: E0127 15:45:26.088563 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:45:36 crc kubenswrapper[4697]: I0127 15:45:36.569478 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:45:36 crc kubenswrapper[4697]: E0127 15:45:36.570276 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:45:37 crc kubenswrapper[4697]: I0127 15:45:37.181048 4697 generic.go:334] "Generic (PLEG): container finished" podID="e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3" containerID="c218f85bf58b370981d34b3f9f9ac5725a420b22118fbb85968c614736916d6d" exitCode=0 Jan 27 15:45:37 crc kubenswrapper[4697]: I0127 15:45:37.181172 4697 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" event={"ID":"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3","Type":"ContainerDied","Data":"c218f85bf58b370981d34b3f9f9ac5725a420b22118fbb85968c614736916d6d"} Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.757342 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.832477 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd992\" (UniqueName: \"kubernetes.io/projected/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-kube-api-access-gd992\") pod \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.832667 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-ssh-key-openstack-edpm-ipam\") pod \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.832721 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-inventory\") pod \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\" (UID: \"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3\") " Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.840052 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-kube-api-access-gd992" (OuterVolumeSpecName: "kube-api-access-gd992") pod "e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3" (UID: "e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3"). InnerVolumeSpecName "kube-api-access-gd992". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.864000 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-inventory" (OuterVolumeSpecName: "inventory") pod "e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3" (UID: "e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.864344 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3" (UID: "e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.934715 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.934977 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:38 crc kubenswrapper[4697]: I0127 15:45:38.935046 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd992\" (UniqueName: \"kubernetes.io/projected/e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3-kube-api-access-gd992\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.200315 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" 
event={"ID":"e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3","Type":"ContainerDied","Data":"b9ebd6218ac6958acc8914cf66b7c78fbad83da743b98f42595b645e90b74940"} Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.200610 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9ebd6218ac6958acc8914cf66b7c78fbad83da743b98f42595b645e90b74940" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.200373 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-666b2" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.283429 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nkk45"] Jan 27 15:45:39 crc kubenswrapper[4697]: E0127 15:45:39.283793 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afae58e-9132-4f96-b373-82beb70e245b" containerName="extract-utilities" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.283807 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afae58e-9132-4f96-b373-82beb70e245b" containerName="extract-utilities" Jan 27 15:45:39 crc kubenswrapper[4697]: E0127 15:45:39.283820 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afae58e-9132-4f96-b373-82beb70e245b" containerName="extract-content" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.283826 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afae58e-9132-4f96-b373-82beb70e245b" containerName="extract-content" Jan 27 15:45:39 crc kubenswrapper[4697]: E0127 15:45:39.283840 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0ad28c-9462-4d97-bcc9-da634e079fd2" containerName="collect-profiles" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.283846 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0ad28c-9462-4d97-bcc9-da634e079fd2" containerName="collect-profiles" Jan 27 15:45:39 crc kubenswrapper[4697]: E0127 
15:45:39.283863 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.283869 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 15:45:39 crc kubenswrapper[4697]: E0127 15:45:39.283886 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afae58e-9132-4f96-b373-82beb70e245b" containerName="registry-server" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.283892 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afae58e-9132-4f96-b373-82beb70e245b" containerName="registry-server" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.284050 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0ad28c-9462-4d97-bcc9-da634e079fd2" containerName="collect-profiles" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.284073 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afae58e-9132-4f96-b373-82beb70e245b" containerName="registry-server" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.284089 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.284647 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.287504 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.287850 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.287868 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.297404 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.315239 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nkk45"]
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.443079 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nkk45\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") " pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.443186 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nkk45\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") " pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.443252 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d7s2\" (UniqueName: \"kubernetes.io/projected/707d2908-c632-4cb5-9a3f-8d44f79aedcb-kube-api-access-5d7s2\") pod \"ssh-known-hosts-edpm-deployment-nkk45\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") " pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.545328 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nkk45\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") " pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.545428 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d7s2\" (UniqueName: \"kubernetes.io/projected/707d2908-c632-4cb5-9a3f-8d44f79aedcb-kube-api-access-5d7s2\") pod \"ssh-known-hosts-edpm-deployment-nkk45\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") " pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.545531 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nkk45\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") " pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.550863 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nkk45\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") " pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.556438 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nkk45\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") " pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.577191 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d7s2\" (UniqueName: \"kubernetes.io/projected/707d2908-c632-4cb5-9a3f-8d44f79aedcb-kube-api-access-5d7s2\") pod \"ssh-known-hosts-edpm-deployment-nkk45\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") " pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:39 crc kubenswrapper[4697]: I0127 15:45:39.607805 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:40 crc kubenswrapper[4697]: I0127 15:45:40.142642 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nkk45"]
Jan 27 15:45:40 crc kubenswrapper[4697]: I0127 15:45:40.209218 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nkk45" event={"ID":"707d2908-c632-4cb5-9a3f-8d44f79aedcb","Type":"ContainerStarted","Data":"aa4c25fef06f40ab181082762d9c692d24a1618afa4ef75d7faae79c69702e25"}
Jan 27 15:45:41 crc kubenswrapper[4697]: I0127 15:45:41.218641 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nkk45" event={"ID":"707d2908-c632-4cb5-9a3f-8d44f79aedcb","Type":"ContainerStarted","Data":"4866617d2176657a46fd2c3d0c76fee628fdf64323efdf8a1e5ad00b888a8643"}
Jan 27 15:45:41 crc kubenswrapper[4697]: I0127 15:45:41.245611 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-nkk45" podStartSLOduration=1.826865788 podStartE2EDuration="2.245588932s" podCreationTimestamp="2026-01-27 15:45:39 +0000 UTC" firstStartedPulling="2026-01-27 15:45:40.153617468 +0000 UTC m=+2236.326017249" lastFinishedPulling="2026-01-27 15:45:40.572340612 +0000 UTC m=+2236.744740393" observedRunningTime="2026-01-27 15:45:41.236880797 +0000 UTC m=+2237.409280578" watchObservedRunningTime="2026-01-27 15:45:41.245588932 +0000 UTC m=+2237.417988713"
Jan 27 15:45:48 crc kubenswrapper[4697]: I0127 15:45:48.058449 4697 scope.go:117] "RemoveContainer" containerID="fa905afa8cf330172707edf7a50b0996520776d36a6e83c10c9788272644f84f"
Jan 27 15:45:48 crc kubenswrapper[4697]: I0127 15:45:48.284283 4697 generic.go:334] "Generic (PLEG): container finished" podID="707d2908-c632-4cb5-9a3f-8d44f79aedcb" containerID="4866617d2176657a46fd2c3d0c76fee628fdf64323efdf8a1e5ad00b888a8643" exitCode=0
Jan 27 15:45:48 crc kubenswrapper[4697]: I0127 15:45:48.284385 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nkk45" event={"ID":"707d2908-c632-4cb5-9a3f-8d44f79aedcb","Type":"ContainerDied","Data":"4866617d2176657a46fd2c3d0c76fee628fdf64323efdf8a1e5ad00b888a8643"}
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.707873 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.881329 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d7s2\" (UniqueName: \"kubernetes.io/projected/707d2908-c632-4cb5-9a3f-8d44f79aedcb-kube-api-access-5d7s2\") pod \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") "
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.881582 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-inventory-0\") pod \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") "
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.881686 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-ssh-key-openstack-edpm-ipam\") pod \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\" (UID: \"707d2908-c632-4cb5-9a3f-8d44f79aedcb\") "
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.896827 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707d2908-c632-4cb5-9a3f-8d44f79aedcb-kube-api-access-5d7s2" (OuterVolumeSpecName: "kube-api-access-5d7s2") pod "707d2908-c632-4cb5-9a3f-8d44f79aedcb" (UID: "707d2908-c632-4cb5-9a3f-8d44f79aedcb"). InnerVolumeSpecName "kube-api-access-5d7s2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.911559 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "707d2908-c632-4cb5-9a3f-8d44f79aedcb" (UID: "707d2908-c632-4cb5-9a3f-8d44f79aedcb"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.911905 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "707d2908-c632-4cb5-9a3f-8d44f79aedcb" (UID: "707d2908-c632-4cb5-9a3f-8d44f79aedcb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.984230 4697 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.984301 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/707d2908-c632-4cb5-9a3f-8d44f79aedcb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 15:45:49 crc kubenswrapper[4697]: I0127 15:45:49.984320 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d7s2\" (UniqueName: \"kubernetes.io/projected/707d2908-c632-4cb5-9a3f-8d44f79aedcb-kube-api-access-5d7s2\") on node \"crc\" DevicePath \"\""
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.308104 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nkk45" event={"ID":"707d2908-c632-4cb5-9a3f-8d44f79aedcb","Type":"ContainerDied","Data":"aa4c25fef06f40ab181082762d9c692d24a1618afa4ef75d7faae79c69702e25"}
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.308486 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa4c25fef06f40ab181082762d9c692d24a1618afa4ef75d7faae79c69702e25"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.308730 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nkk45"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.367736 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"]
Jan 27 15:45:50 crc kubenswrapper[4697]: E0127 15:45:50.368163 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d2908-c632-4cb5-9a3f-8d44f79aedcb" containerName="ssh-known-hosts-edpm-deployment"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.368184 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d2908-c632-4cb5-9a3f-8d44f79aedcb" containerName="ssh-known-hosts-edpm-deployment"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.368368 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d2908-c632-4cb5-9a3f-8d44f79aedcb" containerName="ssh-known-hosts-edpm-deployment"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.369267 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.371541 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.373824 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.374378 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.379898 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.382089 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"]
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.493013 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tcrw\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.493132 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkdc4\" (UniqueName: \"kubernetes.io/projected/f15a6662-a671-40da-9473-59daaedbe07c-kube-api-access-fkdc4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tcrw\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.493733 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tcrw\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.569124 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6"
Jan 27 15:45:50 crc kubenswrapper[4697]: E0127 15:45:50.569394 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.604105 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tcrw\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.604832 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tcrw\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.605069 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkdc4\" (UniqueName: \"kubernetes.io/projected/f15a6662-a671-40da-9473-59daaedbe07c-kube-api-access-fkdc4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tcrw\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.610546 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tcrw\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.622388 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tcrw\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.623085 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkdc4\" (UniqueName: \"kubernetes.io/projected/f15a6662-a671-40da-9473-59daaedbe07c-kube-api-access-fkdc4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tcrw\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:50 crc kubenswrapper[4697]: I0127 15:45:50.688211 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:45:51 crc kubenswrapper[4697]: I0127 15:45:51.194766 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"]
Jan 27 15:45:51 crc kubenswrapper[4697]: I0127 15:45:51.317671 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw" event={"ID":"f15a6662-a671-40da-9473-59daaedbe07c","Type":"ContainerStarted","Data":"44463940566ef2c2e837fe2a3d361119533a5ca19fc23addfa8f56f1ca4edf49"}
Jan 27 15:45:52 crc kubenswrapper[4697]: I0127 15:45:52.328074 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw" event={"ID":"f15a6662-a671-40da-9473-59daaedbe07c","Type":"ContainerStarted","Data":"9346a3c7920bb88430cc6b83cc5f0a1cfeb100ba2c4e25adbc85a55efc6bf3ea"}
Jan 27 15:45:52 crc kubenswrapper[4697]: I0127 15:45:52.357138 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw" podStartSLOduration=1.919358369 podStartE2EDuration="2.357117885s" podCreationTimestamp="2026-01-27 15:45:50 +0000 UTC" firstStartedPulling="2026-01-27 15:45:51.209532053 +0000 UTC m=+2247.381931834" lastFinishedPulling="2026-01-27 15:45:51.647291569 +0000 UTC m=+2247.819691350" observedRunningTime="2026-01-27 15:45:52.345622431 +0000 UTC m=+2248.518022222" watchObservedRunningTime="2026-01-27 15:45:52.357117885 +0000 UTC m=+2248.529517666"
Jan 27 15:46:00 crc kubenswrapper[4697]: I0127 15:46:00.406457 4697 generic.go:334] "Generic (PLEG): container finished" podID="f15a6662-a671-40da-9473-59daaedbe07c" containerID="9346a3c7920bb88430cc6b83cc5f0a1cfeb100ba2c4e25adbc85a55efc6bf3ea" exitCode=0
Jan 27 15:46:00 crc kubenswrapper[4697]: I0127 15:46:00.406550 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw" event={"ID":"f15a6662-a671-40da-9473-59daaedbe07c","Type":"ContainerDied","Data":"9346a3c7920bb88430cc6b83cc5f0a1cfeb100ba2c4e25adbc85a55efc6bf3ea"}
Jan 27 15:46:01 crc kubenswrapper[4697]: I0127 15:46:01.569714 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6"
Jan 27 15:46:01 crc kubenswrapper[4697]: E0127 15:46:01.570368 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 15:46:01 crc kubenswrapper[4697]: I0127 15:46:01.875576 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.041502 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-ssh-key-openstack-edpm-ipam\") pod \"f15a6662-a671-40da-9473-59daaedbe07c\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") "
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.041690 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkdc4\" (UniqueName: \"kubernetes.io/projected/f15a6662-a671-40da-9473-59daaedbe07c-kube-api-access-fkdc4\") pod \"f15a6662-a671-40da-9473-59daaedbe07c\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") "
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.042483 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-inventory\") pod \"f15a6662-a671-40da-9473-59daaedbe07c\" (UID: \"f15a6662-a671-40da-9473-59daaedbe07c\") "
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.047250 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f15a6662-a671-40da-9473-59daaedbe07c-kube-api-access-fkdc4" (OuterVolumeSpecName: "kube-api-access-fkdc4") pod "f15a6662-a671-40da-9473-59daaedbe07c" (UID: "f15a6662-a671-40da-9473-59daaedbe07c"). InnerVolumeSpecName "kube-api-access-fkdc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.068598 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-inventory" (OuterVolumeSpecName: "inventory") pod "f15a6662-a671-40da-9473-59daaedbe07c" (UID: "f15a6662-a671-40da-9473-59daaedbe07c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.070731 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f15a6662-a671-40da-9473-59daaedbe07c" (UID: "f15a6662-a671-40da-9473-59daaedbe07c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.145121 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.145150 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f15a6662-a671-40da-9473-59daaedbe07c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.145160 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkdc4\" (UniqueName: \"kubernetes.io/projected/f15a6662-a671-40da-9473-59daaedbe07c-kube-api-access-fkdc4\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.427359 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw" event={"ID":"f15a6662-a671-40da-9473-59daaedbe07c","Type":"ContainerDied","Data":"44463940566ef2c2e837fe2a3d361119533a5ca19fc23addfa8f56f1ca4edf49"}
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.427402 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44463940566ef2c2e837fe2a3d361119533a5ca19fc23addfa8f56f1ca4edf49"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.427466 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tcrw"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.528470 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"]
Jan 27 15:46:02 crc kubenswrapper[4697]: E0127 15:46:02.528920 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15a6662-a671-40da-9473-59daaedbe07c" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.528941 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15a6662-a671-40da-9473-59daaedbe07c" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.529183 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15a6662-a671-40da-9473-59daaedbe07c" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.529919 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.534893 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.534913 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.535475 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.537084 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.545227 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"]
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.653031 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9wv5\" (UniqueName: \"kubernetes.io/projected/59725918-a0f4-46fb-afcf-393ee1d4d22b-kube-api-access-q9wv5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.653375 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.653596 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.755763 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.756348 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.756588 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9wv5\" (UniqueName: \"kubernetes.io/projected/59725918-a0f4-46fb-afcf-393ee1d4d22b-kube-api-access-q9wv5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.760027 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.760107 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.775041 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9wv5\" (UniqueName: \"kubernetes.io/projected/59725918-a0f4-46fb-afcf-393ee1d4d22b-kube-api-access-q9wv5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:02 crc kubenswrapper[4697]: I0127 15:46:02.849035 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:03 crc kubenswrapper[4697]: I0127 15:46:03.349229 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"]
Jan 27 15:46:03 crc kubenswrapper[4697]: I0127 15:46:03.435774 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9" event={"ID":"59725918-a0f4-46fb-afcf-393ee1d4d22b","Type":"ContainerStarted","Data":"4c551f9ab386fc0397378db50790c2f8294922402868738b8c9d07a93e0849ff"}
Jan 27 15:46:04 crc kubenswrapper[4697]: I0127 15:46:04.468236 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9" event={"ID":"59725918-a0f4-46fb-afcf-393ee1d4d22b","Type":"ContainerStarted","Data":"35732367c11e08400215f8b629c40658ef9693beededf2b8eb2f1f7482446410"}
Jan 27 15:46:04 crc kubenswrapper[4697]: I0127 15:46:04.493249 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9" podStartSLOduration=1.7972924799999999 podStartE2EDuration="2.493227993s" podCreationTimestamp="2026-01-27 15:46:02 +0000 UTC" firstStartedPulling="2026-01-27 15:46:03.367126572 +0000 UTC m=+2259.539526353" lastFinishedPulling="2026-01-27 15:46:04.063062075 +0000 UTC m=+2260.235461866" observedRunningTime="2026-01-27 15:46:04.487388257 +0000 UTC m=+2260.659788058" watchObservedRunningTime="2026-01-27 15:46:04.493227993 +0000 UTC m=+2260.665627774"
Jan 27 15:46:14 crc kubenswrapper[4697]: I0127 15:46:14.554883 4697 generic.go:334] "Generic (PLEG): container finished" podID="59725918-a0f4-46fb-afcf-393ee1d4d22b" containerID="35732367c11e08400215f8b629c40658ef9693beededf2b8eb2f1f7482446410" exitCode=0
Jan 27 15:46:14 crc kubenswrapper[4697]: I0127 15:46:14.554974 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9" event={"ID":"59725918-a0f4-46fb-afcf-393ee1d4d22b","Type":"ContainerDied","Data":"35732367c11e08400215f8b629c40658ef9693beededf2b8eb2f1f7482446410"}
Jan 27 15:46:15 crc kubenswrapper[4697]: I0127 15:46:15.953194 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.028960 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-ssh-key-openstack-edpm-ipam\") pod \"59725918-a0f4-46fb-afcf-393ee1d4d22b\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") "
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.029013 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9wv5\" (UniqueName: \"kubernetes.io/projected/59725918-a0f4-46fb-afcf-393ee1d4d22b-kube-api-access-q9wv5\") pod \"59725918-a0f4-46fb-afcf-393ee1d4d22b\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") "
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.029077 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-inventory\") pod \"59725918-a0f4-46fb-afcf-393ee1d4d22b\" (UID: \"59725918-a0f4-46fb-afcf-393ee1d4d22b\") "
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.034071 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59725918-a0f4-46fb-afcf-393ee1d4d22b-kube-api-access-q9wv5" (OuterVolumeSpecName: "kube-api-access-q9wv5") pod "59725918-a0f4-46fb-afcf-393ee1d4d22b" (UID: "59725918-a0f4-46fb-afcf-393ee1d4d22b"). InnerVolumeSpecName "kube-api-access-q9wv5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.054886 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "59725918-a0f4-46fb-afcf-393ee1d4d22b" (UID: "59725918-a0f4-46fb-afcf-393ee1d4d22b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.063053 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-inventory" (OuterVolumeSpecName: "inventory") pod "59725918-a0f4-46fb-afcf-393ee1d4d22b" (UID: "59725918-a0f4-46fb-afcf-393ee1d4d22b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.131611 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.131651 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9wv5\" (UniqueName: \"kubernetes.io/projected/59725918-a0f4-46fb-afcf-393ee1d4d22b-kube-api-access-q9wv5\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.131661 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59725918-a0f4-46fb-afcf-393ee1d4d22b-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.568857 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6"
Jan 27 15:46:16 crc kubenswrapper[4697]: E0127 15:46:16.569198 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.574025 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.578564 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9" event={"ID":"59725918-a0f4-46fb-afcf-393ee1d4d22b","Type":"ContainerDied","Data":"4c551f9ab386fc0397378db50790c2f8294922402868738b8c9d07a93e0849ff"}
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.578620 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c551f9ab386fc0397378db50790c2f8294922402868738b8c9d07a93e0849ff"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.650881 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z"]
Jan 27 15:46:16 crc kubenswrapper[4697]: E0127 15:46:16.651493 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59725918-a0f4-46fb-afcf-393ee1d4d22b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.651506 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="59725918-a0f4-46fb-afcf-393ee1d4d22b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.651702 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="59725918-a0f4-46fb-afcf-393ee1d4d22b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.652303 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.654561 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.656532 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.659587 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.660192 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.660336 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.660466 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.660638 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.662663 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.676217 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z"]
Jan 27
15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.747216 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.747537 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.747678 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.748066 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc 
kubenswrapper[4697]: I0127 15:46:16.748209 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.748296 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbj67\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-kube-api-access-nbj67\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.748384 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.748493 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 
15:46:16.748591 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.748678 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.748774 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.748880 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.748994 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.749116 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851094 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851192 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851243 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851275 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851312 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851346 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbj67\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-kube-api-access-nbj67\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851378 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851422 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851459 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851489 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851532 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851555 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851599 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.851632 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.856580 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.859475 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.860000 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.860018 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.865466 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.865579 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.866494 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.866586 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.867233 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 
15:46:16.872181 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.872681 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.873672 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.874514 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbj67\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-kube-api-access-nbj67\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.875462 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j646z\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:16 crc kubenswrapper[4697]: I0127 15:46:16.984088 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:17 crc kubenswrapper[4697]: I0127 15:46:17.510267 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z"] Jan 27 15:46:17 crc kubenswrapper[4697]: I0127 15:46:17.596547 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" event={"ID":"59c9a20e-f30b-44c1-86ff-fc751969cb24","Type":"ContainerStarted","Data":"50c774c7631b0f963cf80d32f7579ca0e42668f437b054d8289e31158e839bbb"} Jan 27 15:46:18 crc kubenswrapper[4697]: I0127 15:46:18.606223 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" event={"ID":"59c9a20e-f30b-44c1-86ff-fc751969cb24","Type":"ContainerStarted","Data":"f289e6897659af80fb6d4e196c40aa9663169bb1342151f24b13d51c63f34a64"} Jan 27 15:46:18 crc kubenswrapper[4697]: I0127 15:46:18.628724 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" podStartSLOduration=2.230511957 podStartE2EDuration="2.628702513s" podCreationTimestamp="2026-01-27 15:46:16 +0000 UTC" firstStartedPulling="2026-01-27 15:46:17.514055957 +0000 UTC m=+2273.686455738" lastFinishedPulling="2026-01-27 15:46:17.912246513 +0000 UTC m=+2274.084646294" observedRunningTime="2026-01-27 15:46:18.622851108 +0000 UTC m=+2274.795250899" watchObservedRunningTime="2026-01-27 
15:46:18.628702513 +0000 UTC m=+2274.801102294" Jan 27 15:46:30 crc kubenswrapper[4697]: I0127 15:46:30.569272 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:46:30 crc kubenswrapper[4697]: E0127 15:46:30.571111 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:46:44 crc kubenswrapper[4697]: I0127 15:46:44.578273 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:46:44 crc kubenswrapper[4697]: E0127 15:46:44.579379 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:46:56 crc kubenswrapper[4697]: I0127 15:46:56.923855 4697 generic.go:334] "Generic (PLEG): container finished" podID="59c9a20e-f30b-44c1-86ff-fc751969cb24" containerID="f289e6897659af80fb6d4e196c40aa9663169bb1342151f24b13d51c63f34a64" exitCode=0 Jan 27 15:46:56 crc kubenswrapper[4697]: I0127 15:46:56.923951 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" event={"ID":"59c9a20e-f30b-44c1-86ff-fc751969cb24","Type":"ContainerDied","Data":"f289e6897659af80fb6d4e196c40aa9663169bb1342151f24b13d51c63f34a64"} Jan 27 
15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.368031 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.453494 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-inventory\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454006 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-bootstrap-combined-ca-bundle\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454065 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbj67\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-kube-api-access-nbj67\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454102 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-nova-combined-ca-bundle\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454149 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454174 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-repo-setup-combined-ca-bundle\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454244 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-telemetry-combined-ca-bundle\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454344 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-libvirt-combined-ca-bundle\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454368 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ovn-combined-ca-bundle\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454429 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: 
\"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454460 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454487 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454534 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ssh-key-openstack-edpm-ipam\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.454592 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-neutron-metadata-combined-ca-bundle\") pod \"59c9a20e-f30b-44c1-86ff-fc751969cb24\" (UID: \"59c9a20e-f30b-44c1-86ff-fc751969cb24\") " Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.461383 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: 
"59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.461634 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.464202 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.464371 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.465188 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.465291 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-kube-api-access-nbj67" (OuterVolumeSpecName: "kube-api-access-nbj67") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "kube-api-access-nbj67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.465353 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.466608 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.467138 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.472148 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.472248 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.479951 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.489418 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-inventory" (OuterVolumeSpecName: "inventory") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.496704 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "59c9a20e-f30b-44c1-86ff-fc751969cb24" (UID: "59c9a20e-f30b-44c1-86ff-fc751969cb24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556204 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556248 4697 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556267 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556285 4697 reconciler_common.go:293] "Volume detached for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556299 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbj67\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-kube-api-access-nbj67\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556310 4697 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556322 4697 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556334 4697 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556348 4697 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556359 4697 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556373 4697 
reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c9a20e-f30b-44c1-86ff-fc751969cb24-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556386 4697 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556400 4697 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.556414 4697 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/59c9a20e-f30b-44c1-86ff-fc751969cb24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.941679 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" event={"ID":"59c9a20e-f30b-44c1-86ff-fc751969cb24","Type":"ContainerDied","Data":"50c774c7631b0f963cf80d32f7579ca0e42668f437b054d8289e31158e839bbb"} Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.941727 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50c774c7631b0f963cf80d32f7579ca0e42668f437b054d8289e31158e839bbb" Jan 27 15:46:58 crc kubenswrapper[4697]: I0127 15:46:58.941882 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j646z" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.060664 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck"] Jan 27 15:46:59 crc kubenswrapper[4697]: E0127 15:46:59.061079 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c9a20e-f30b-44c1-86ff-fc751969cb24" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.061099 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c9a20e-f30b-44c1-86ff-fc751969cb24" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.061294 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c9a20e-f30b-44c1-86ff-fc751969cb24" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.062005 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.065230 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.065256 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.066530 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.066578 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.066608 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggtt6\" (UniqueName: \"kubernetes.io/projected/679f5e04-5c46-49e5-9216-f850ca38d84d-kube-api-access-ggtt6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.066715 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/679f5e04-5c46-49e5-9216-f850ca38d84d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.066773 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.067890 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.068288 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.068460 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.076615 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck"] Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.168859 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.169036 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.169085 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.169142 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggtt6\" (UniqueName: \"kubernetes.io/projected/679f5e04-5c46-49e5-9216-f850ca38d84d-kube-api-access-ggtt6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.169261 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/679f5e04-5c46-49e5-9216-f850ca38d84d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.170281 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/679f5e04-5c46-49e5-9216-f850ca38d84d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.172630 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.173192 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.175594 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.189360 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggtt6\" (UniqueName: \"kubernetes.io/projected/679f5e04-5c46-49e5-9216-f850ca38d84d-kube-api-access-ggtt6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ljqck\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.382586 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.569398 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:46:59 crc kubenswrapper[4697]: E0127 15:46:59.569950 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.882461 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck"] Jan 27 15:46:59 crc kubenswrapper[4697]: I0127 15:46:59.953222 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" event={"ID":"679f5e04-5c46-49e5-9216-f850ca38d84d","Type":"ContainerStarted","Data":"3312d2b63a66de5248640a382d75755e35771d3ac31c0931325709c45b0920ba"} Jan 27 15:47:00 crc kubenswrapper[4697]: I0127 15:47:00.961483 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" event={"ID":"679f5e04-5c46-49e5-9216-f850ca38d84d","Type":"ContainerStarted","Data":"bf381402ce0acdf780e796b1136a779df17ac66c8ffb5bc83614bd9ef1e93616"} Jan 27 15:47:00 crc kubenswrapper[4697]: I0127 15:47:00.983061 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" podStartSLOduration=1.556251342 podStartE2EDuration="1.983036416s" podCreationTimestamp="2026-01-27 15:46:59 +0000 UTC" firstStartedPulling="2026-01-27 15:46:59.890355695 +0000 UTC 
m=+2316.062755476" lastFinishedPulling="2026-01-27 15:47:00.317140769 +0000 UTC m=+2316.489540550" observedRunningTime="2026-01-27 15:47:00.974677649 +0000 UTC m=+2317.147077430" watchObservedRunningTime="2026-01-27 15:47:00.983036416 +0000 UTC m=+2317.155436197" Jan 27 15:47:10 crc kubenswrapper[4697]: I0127 15:47:10.568941 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:47:10 crc kubenswrapper[4697]: E0127 15:47:10.569730 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:47:24 crc kubenswrapper[4697]: I0127 15:47:24.574336 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:47:24 crc kubenswrapper[4697]: E0127 15:47:24.575282 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:47:37 crc kubenswrapper[4697]: I0127 15:47:37.568496 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:47:37 crc kubenswrapper[4697]: E0127 15:47:37.569201 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:47:48 crc kubenswrapper[4697]: I0127 15:47:48.570488 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:47:48 crc kubenswrapper[4697]: E0127 15:47:48.571833 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:47:59 crc kubenswrapper[4697]: I0127 15:47:59.568767 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:47:59 crc kubenswrapper[4697]: E0127 15:47:59.569503 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:48:11 crc kubenswrapper[4697]: I0127 15:48:11.572393 4697 generic.go:334] "Generic (PLEG): container finished" podID="679f5e04-5c46-49e5-9216-f850ca38d84d" containerID="bf381402ce0acdf780e796b1136a779df17ac66c8ffb5bc83614bd9ef1e93616" exitCode=0 Jan 27 15:48:11 crc kubenswrapper[4697]: I0127 15:48:11.572484 4697 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" event={"ID":"679f5e04-5c46-49e5-9216-f850ca38d84d","Type":"ContainerDied","Data":"bf381402ce0acdf780e796b1136a779df17ac66c8ffb5bc83614bd9ef1e93616"} Jan 27 15:48:12 crc kubenswrapper[4697]: I0127 15:48:12.574137 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:48:12 crc kubenswrapper[4697]: E0127 15:48:12.575401 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.188204 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.359503 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggtt6\" (UniqueName: \"kubernetes.io/projected/679f5e04-5c46-49e5-9216-f850ca38d84d-kube-api-access-ggtt6\") pod \"679f5e04-5c46-49e5-9216-f850ca38d84d\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.360003 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/679f5e04-5c46-49e5-9216-f850ca38d84d-ovncontroller-config-0\") pod \"679f5e04-5c46-49e5-9216-f850ca38d84d\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.360634 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-inventory\") pod \"679f5e04-5c46-49e5-9216-f850ca38d84d\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.360750 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ovn-combined-ca-bundle\") pod \"679f5e04-5c46-49e5-9216-f850ca38d84d\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.360962 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ssh-key-openstack-edpm-ipam\") pod \"679f5e04-5c46-49e5-9216-f850ca38d84d\" (UID: \"679f5e04-5c46-49e5-9216-f850ca38d84d\") " Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.365391 4697 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679f5e04-5c46-49e5-9216-f850ca38d84d-kube-api-access-ggtt6" (OuterVolumeSpecName: "kube-api-access-ggtt6") pod "679f5e04-5c46-49e5-9216-f850ca38d84d" (UID: "679f5e04-5c46-49e5-9216-f850ca38d84d"). InnerVolumeSpecName "kube-api-access-ggtt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.365759 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "679f5e04-5c46-49e5-9216-f850ca38d84d" (UID: "679f5e04-5c46-49e5-9216-f850ca38d84d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.386225 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "679f5e04-5c46-49e5-9216-f850ca38d84d" (UID: "679f5e04-5c46-49e5-9216-f850ca38d84d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.387097 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-inventory" (OuterVolumeSpecName: "inventory") pod "679f5e04-5c46-49e5-9216-f850ca38d84d" (UID: "679f5e04-5c46-49e5-9216-f850ca38d84d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.393429 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/679f5e04-5c46-49e5-9216-f850ca38d84d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "679f5e04-5c46-49e5-9216-f850ca38d84d" (UID: "679f5e04-5c46-49e5-9216-f850ca38d84d"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.462326 4697 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/679f5e04-5c46-49e5-9216-f850ca38d84d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.462365 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.462377 4697 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.462390 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/679f5e04-5c46-49e5-9216-f850ca38d84d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.462404 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggtt6\" (UniqueName: \"kubernetes.io/projected/679f5e04-5c46-49e5-9216-f850ca38d84d-kube-api-access-ggtt6\") on node \"crc\" DevicePath \"\"" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.599086 4697 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" event={"ID":"679f5e04-5c46-49e5-9216-f850ca38d84d","Type":"ContainerDied","Data":"3312d2b63a66de5248640a382d75755e35771d3ac31c0931325709c45b0920ba"} Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.599126 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3312d2b63a66de5248640a382d75755e35771d3ac31c0931325709c45b0920ba" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.599180 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ljqck" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.694815 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf"] Jan 27 15:48:13 crc kubenswrapper[4697]: E0127 15:48:13.695207 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679f5e04-5c46-49e5-9216-f850ca38d84d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.695227 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="679f5e04-5c46-49e5-9216-f850ca38d84d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.695434 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="679f5e04-5c46-49e5-9216-f850ca38d84d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.696196 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.700162 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.700209 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.700293 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.700444 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.701342 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.702002 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.792529 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.792591 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.792647 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.792672 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.792706 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s86m8\" (UniqueName: \"kubernetes.io/projected/38a907af-3d24-434c-a097-3b3635db95d3-kube-api-access-s86m8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.792732 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.796062 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf"] Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.894957 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.895386 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.895584 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.895719 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.895852 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s86m8\" (UniqueName: \"kubernetes.io/projected/38a907af-3d24-434c-a097-3b3635db95d3-kube-api-access-s86m8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.896006 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.899869 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.910709 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.910754 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.910878 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.911526 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:13 crc kubenswrapper[4697]: I0127 15:48:13.915080 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s86m8\" (UniqueName: \"kubernetes.io/projected/38a907af-3d24-434c-a097-3b3635db95d3-kube-api-access-s86m8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf\" (UID: 
\"38a907af-3d24-434c-a097-3b3635db95d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:14 crc kubenswrapper[4697]: I0127 15:48:14.016675 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:48:14 crc kubenswrapper[4697]: I0127 15:48:14.613475 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf"] Jan 27 15:48:14 crc kubenswrapper[4697]: I0127 15:48:14.622193 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:48:15 crc kubenswrapper[4697]: I0127 15:48:15.636637 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" event={"ID":"38a907af-3d24-434c-a097-3b3635db95d3","Type":"ContainerStarted","Data":"9d6f5d454890d8a4d8853891df13446ee9548c928e5ea0bf0a85055e8a579e04"} Jan 27 15:48:15 crc kubenswrapper[4697]: I0127 15:48:15.637009 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" event={"ID":"38a907af-3d24-434c-a097-3b3635db95d3","Type":"ContainerStarted","Data":"a323472b26262eece24ccb7767a77d46c0414b824584910bbd9f7ece1b1bb611"} Jan 27 15:48:15 crc kubenswrapper[4697]: I0127 15:48:15.681666 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" podStartSLOduration=2.09781737 podStartE2EDuration="2.681643874s" podCreationTimestamp="2026-01-27 15:48:13 +0000 UTC" firstStartedPulling="2026-01-27 15:48:14.62195375 +0000 UTC m=+2390.794353531" lastFinishedPulling="2026-01-27 15:48:15.205780254 +0000 UTC m=+2391.378180035" observedRunningTime="2026-01-27 15:48:15.672759724 +0000 UTC m=+2391.845159505" watchObservedRunningTime="2026-01-27 
15:48:15.681643874 +0000 UTC m=+2391.854043655" Jan 27 15:48:24 crc kubenswrapper[4697]: I0127 15:48:24.574470 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:48:24 crc kubenswrapper[4697]: E0127 15:48:24.576542 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:48:36 crc kubenswrapper[4697]: I0127 15:48:36.570182 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:48:36 crc kubenswrapper[4697]: E0127 15:48:36.572973 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:48:49 crc kubenswrapper[4697]: I0127 15:48:49.569470 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:48:49 crc kubenswrapper[4697]: E0127 15:48:49.570446 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:49:01 crc kubenswrapper[4697]: I0127 15:49:01.568216 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:49:01 crc kubenswrapper[4697]: E0127 15:49:01.569023 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:49:10 crc kubenswrapper[4697]: I0127 15:49:10.092622 4697 generic.go:334] "Generic (PLEG): container finished" podID="38a907af-3d24-434c-a097-3b3635db95d3" containerID="9d6f5d454890d8a4d8853891df13446ee9548c928e5ea0bf0a85055e8a579e04" exitCode=0 Jan 27 15:49:10 crc kubenswrapper[4697]: I0127 15:49:10.092909 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" event={"ID":"38a907af-3d24-434c-a097-3b3635db95d3","Type":"ContainerDied","Data":"9d6f5d454890d8a4d8853891df13446ee9548c928e5ea0bf0a85055e8a579e04"} Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.552756 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.726664 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-inventory\") pod \"38a907af-3d24-434c-a097-3b3635db95d3\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.727126 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"38a907af-3d24-434c-a097-3b3635db95d3\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.727736 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-nova-metadata-neutron-config-0\") pod \"38a907af-3d24-434c-a097-3b3635db95d3\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.727842 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-metadata-combined-ca-bundle\") pod \"38a907af-3d24-434c-a097-3b3635db95d3\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.728410 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s86m8\" (UniqueName: \"kubernetes.io/projected/38a907af-3d24-434c-a097-3b3635db95d3-kube-api-access-s86m8\") pod \"38a907af-3d24-434c-a097-3b3635db95d3\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " Jan 
27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.728460 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-ssh-key-openstack-edpm-ipam\") pod \"38a907af-3d24-434c-a097-3b3635db95d3\" (UID: \"38a907af-3d24-434c-a097-3b3635db95d3\") " Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.746147 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "38a907af-3d24-434c-a097-3b3635db95d3" (UID: "38a907af-3d24-434c-a097-3b3635db95d3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.748540 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a907af-3d24-434c-a097-3b3635db95d3-kube-api-access-s86m8" (OuterVolumeSpecName: "kube-api-access-s86m8") pod "38a907af-3d24-434c-a097-3b3635db95d3" (UID: "38a907af-3d24-434c-a097-3b3635db95d3"). InnerVolumeSpecName "kube-api-access-s86m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.761830 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-inventory" (OuterVolumeSpecName: "inventory") pod "38a907af-3d24-434c-a097-3b3635db95d3" (UID: "38a907af-3d24-434c-a097-3b3635db95d3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.768131 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "38a907af-3d24-434c-a097-3b3635db95d3" (UID: "38a907af-3d24-434c-a097-3b3635db95d3"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.772334 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38a907af-3d24-434c-a097-3b3635db95d3" (UID: "38a907af-3d24-434c-a097-3b3635db95d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.774350 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "38a907af-3d24-434c-a097-3b3635db95d3" (UID: "38a907af-3d24-434c-a097-3b3635db95d3"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.831285 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.831363 4697 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.831384 4697 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.831401 4697 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.831417 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s86m8\" (UniqueName: \"kubernetes.io/projected/38a907af-3d24-434c-a097-3b3635db95d3-kube-api-access-s86m8\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:11 crc kubenswrapper[4697]: I0127 15:49:11.831431 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38a907af-3d24-434c-a097-3b3635db95d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.112906 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" event={"ID":"38a907af-3d24-434c-a097-3b3635db95d3","Type":"ContainerDied","Data":"a323472b26262eece24ccb7767a77d46c0414b824584910bbd9f7ece1b1bb611"} Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.112955 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a323472b26262eece24ccb7767a77d46c0414b824584910bbd9f7ece1b1bb611" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.112971 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.227453 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49"] Jan 27 15:49:12 crc kubenswrapper[4697]: E0127 15:49:12.228681 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a907af-3d24-434c-a097-3b3635db95d3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.228869 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a907af-3d24-434c-a097-3b3635db95d3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.229537 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a907af-3d24-434c-a097-3b3635db95d3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.230642 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.238551 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.238773 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.238889 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.238934 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.239350 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.249963 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49"] Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.343172 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.343226 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: 
\"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.343279 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.343447 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98lcn\" (UniqueName: \"kubernetes.io/projected/cb80b572-758d-4bd1-b54a-eb5b40cce9db-kube-api-access-98lcn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.343474 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.444867 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98lcn\" (UniqueName: \"kubernetes.io/projected/cb80b572-758d-4bd1-b54a-eb5b40cce9db-kube-api-access-98lcn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.444910 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.444986 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.445013 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.445056 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.448806 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.450050 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.457888 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.459879 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.463971 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98lcn\" (UniqueName: \"kubernetes.io/projected/cb80b572-758d-4bd1-b54a-eb5b40cce9db-kube-api-access-98lcn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4cl49\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:12 crc kubenswrapper[4697]: I0127 15:49:12.554039 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:49:13 crc kubenswrapper[4697]: I0127 15:49:13.150966 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49"] Jan 27 15:49:14 crc kubenswrapper[4697]: I0127 15:49:14.133692 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" event={"ID":"cb80b572-758d-4bd1-b54a-eb5b40cce9db","Type":"ContainerStarted","Data":"3af2653ca6538ae53c70832639fc1fc8412fa8f3bbd40c27a3428b1dfd844307"} Jan 27 15:49:15 crc kubenswrapper[4697]: I0127 15:49:15.146282 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" event={"ID":"cb80b572-758d-4bd1-b54a-eb5b40cce9db","Type":"ContainerStarted","Data":"02af6775b6aeea32d5e6589f722174f1733497310687a7701eb79f24900967fc"} Jan 27 15:49:15 crc kubenswrapper[4697]: I0127 15:49:15.169080 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" podStartSLOduration=2.308450484 podStartE2EDuration="3.169065436s" podCreationTimestamp="2026-01-27 15:49:12 +0000 UTC" firstStartedPulling="2026-01-27 15:49:13.161857926 +0000 UTC m=+2449.334257707" lastFinishedPulling="2026-01-27 15:49:14.022472878 +0000 UTC m=+2450.194872659" observedRunningTime="2026-01-27 15:49:15.167062656 +0000 UTC m=+2451.339462437" watchObservedRunningTime="2026-01-27 15:49:15.169065436 +0000 UTC m=+2451.341465217" Jan 27 15:49:16 crc kubenswrapper[4697]: I0127 15:49:16.569237 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:49:16 crc kubenswrapper[4697]: E0127 15:49:16.569911 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:49:31 crc kubenswrapper[4697]: I0127 15:49:31.569243 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:49:31 crc kubenswrapper[4697]: E0127 15:49:31.570057 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:49:44 crc kubenswrapper[4697]: I0127 15:49:44.575948 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:49:44 crc kubenswrapper[4697]: E0127 15:49:44.576670 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:49:55 crc kubenswrapper[4697]: I0127 15:49:55.570500 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:49:55 crc kubenswrapper[4697]: E0127 15:49:55.571032 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:50:09 crc kubenswrapper[4697]: I0127 15:50:09.569608 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:50:09 crc kubenswrapper[4697]: E0127 15:50:09.570910 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:50:22 crc kubenswrapper[4697]: I0127 15:50:22.569698 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:50:22 crc kubenswrapper[4697]: E0127 15:50:22.570527 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:50:33 crc kubenswrapper[4697]: I0127 15:50:33.568241 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:50:33 crc kubenswrapper[4697]: I0127 15:50:33.835609 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" 
event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"f6fb7c40b002bba85b4cc9851083ccbb1cbbdf65b8f193243e0beeee887e82e7"} Jan 27 15:52:55 crc kubenswrapper[4697]: I0127 15:52:55.108695 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:52:55 crc kubenswrapper[4697]: I0127 15:52:55.109251 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:53:25 crc kubenswrapper[4697]: I0127 15:53:25.109276 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:53:25 crc kubenswrapper[4697]: I0127 15:53:25.109825 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4697]: I0127 15:53:55.109313 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Jan 27 15:53:55 crc kubenswrapper[4697]: I0127 15:53:55.109876 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4697]: I0127 15:53:55.109931 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:53:55 crc kubenswrapper[4697]: I0127 15:53:55.110627 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6fb7c40b002bba85b4cc9851083ccbb1cbbdf65b8f193243e0beeee887e82e7"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:53:55 crc kubenswrapper[4697]: I0127 15:53:55.110679 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://f6fb7c40b002bba85b4cc9851083ccbb1cbbdf65b8f193243e0beeee887e82e7" gracePeriod=600 Jan 27 15:53:55 crc kubenswrapper[4697]: I0127 15:53:55.601581 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="f6fb7c40b002bba85b4cc9851083ccbb1cbbdf65b8f193243e0beeee887e82e7" exitCode=0 Jan 27 15:53:55 crc kubenswrapper[4697]: I0127 15:53:55.601924 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" 
event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"f6fb7c40b002bba85b4cc9851083ccbb1cbbdf65b8f193243e0beeee887e82e7"} Jan 27 15:53:55 crc kubenswrapper[4697]: I0127 15:53:55.601952 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df"} Jan 27 15:53:55 crc kubenswrapper[4697]: I0127 15:53:55.601967 4697 scope.go:117] "RemoveContainer" containerID="5c02510c7f6b98a79ed557ec4fec93e0bac6410a1ead6771ce500bd398b9cca6" Jan 27 15:54:13 crc kubenswrapper[4697]: I0127 15:54:13.774428 4697 generic.go:334] "Generic (PLEG): container finished" podID="cb80b572-758d-4bd1-b54a-eb5b40cce9db" containerID="02af6775b6aeea32d5e6589f722174f1733497310687a7701eb79f24900967fc" exitCode=0 Jan 27 15:54:13 crc kubenswrapper[4697]: I0127 15:54:13.775224 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" event={"ID":"cb80b572-758d-4bd1-b54a-eb5b40cce9db","Type":"ContainerDied","Data":"02af6775b6aeea32d5e6589f722174f1733497310687a7701eb79f24900967fc"} Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.177960 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.224021 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-ssh-key-openstack-edpm-ipam\") pod \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.224086 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98lcn\" (UniqueName: \"kubernetes.io/projected/cb80b572-758d-4bd1-b54a-eb5b40cce9db-kube-api-access-98lcn\") pod \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.237019 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb80b572-758d-4bd1-b54a-eb5b40cce9db-kube-api-access-98lcn" (OuterVolumeSpecName: "kube-api-access-98lcn") pod "cb80b572-758d-4bd1-b54a-eb5b40cce9db" (UID: "cb80b572-758d-4bd1-b54a-eb5b40cce9db"). InnerVolumeSpecName "kube-api-access-98lcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.265153 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cb80b572-758d-4bd1-b54a-eb5b40cce9db" (UID: "cb80b572-758d-4bd1-b54a-eb5b40cce9db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.325270 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-secret-0\") pod \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.325313 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-inventory\") pod \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.325358 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-combined-ca-bundle\") pod \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\" (UID: \"cb80b572-758d-4bd1-b54a-eb5b40cce9db\") " Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.329147 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.329190 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98lcn\" (UniqueName: \"kubernetes.io/projected/cb80b572-758d-4bd1-b54a-eb5b40cce9db-kube-api-access-98lcn\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.331365 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod 
"cb80b572-758d-4bd1-b54a-eb5b40cce9db" (UID: "cb80b572-758d-4bd1-b54a-eb5b40cce9db"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.354311 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "cb80b572-758d-4bd1-b54a-eb5b40cce9db" (UID: "cb80b572-758d-4bd1-b54a-eb5b40cce9db"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.354618 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-inventory" (OuterVolumeSpecName: "inventory") pod "cb80b572-758d-4bd1-b54a-eb5b40cce9db" (UID: "cb80b572-758d-4bd1-b54a-eb5b40cce9db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.430921 4697 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.431022 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.431033 4697 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb80b572-758d-4bd1-b54a-eb5b40cce9db-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.793661 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" event={"ID":"cb80b572-758d-4bd1-b54a-eb5b40cce9db","Type":"ContainerDied","Data":"3af2653ca6538ae53c70832639fc1fc8412fa8f3bbd40c27a3428b1dfd844307"} Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.794004 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3af2653ca6538ae53c70832639fc1fc8412fa8f3bbd40c27a3428b1dfd844307" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.793718 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4cl49" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.960827 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz"] Jan 27 15:54:15 crc kubenswrapper[4697]: E0127 15:54:15.961259 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb80b572-758d-4bd1-b54a-eb5b40cce9db" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.961282 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb80b572-758d-4bd1-b54a-eb5b40cce9db" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.961490 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb80b572-758d-4bd1-b54a-eb5b40cce9db" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.962228 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.967364 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.967847 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.968630 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.968863 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.968995 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.969833 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 15:54:15 crc kubenswrapper[4697]: I0127 15:54:15.970966 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.022853 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz"] Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.043889 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b858060-b802-452d-aa2a-1be4f38efe74-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 
15:54:16.043972 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.044154 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.044179 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.044601 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.044694 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.044723 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.044745 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.044863 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-844f8\" (UniqueName: \"kubernetes.io/projected/8b858060-b802-452d-aa2a-1be4f38efe74-kube-api-access-844f8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.149011 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") 
" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.149061 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.149083 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.149114 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-844f8\" (UniqueName: \"kubernetes.io/projected/8b858060-b802-452d-aa2a-1be4f38efe74-kube-api-access-844f8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.149133 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b858060-b802-452d-aa2a-1be4f38efe74-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.149168 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.149198 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.149219 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.149264 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.150880 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b858060-b802-452d-aa2a-1be4f38efe74-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.161961 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.167189 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.167501 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.167864 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.167999 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.170168 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.170514 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.174830 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-844f8\" (UniqueName: \"kubernetes.io/projected/8b858060-b802-452d-aa2a-1be4f38efe74-kube-api-access-844f8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jk5zz\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.296450 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.978214 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz"] Jan 27 15:54:16 crc kubenswrapper[4697]: W0127 15:54:16.980682 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b858060_b802_452d_aa2a_1be4f38efe74.slice/crio-50ea95763f795893a05be5a7385e9dff96525242436ec81222a1be8dfffe7a1d WatchSource:0}: Error finding container 50ea95763f795893a05be5a7385e9dff96525242436ec81222a1be8dfffe7a1d: Status 404 returned error can't find the container with id 50ea95763f795893a05be5a7385e9dff96525242436ec81222a1be8dfffe7a1d Jan 27 15:54:16 crc kubenswrapper[4697]: I0127 15:54:16.983876 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:54:17 crc kubenswrapper[4697]: I0127 15:54:17.821541 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" event={"ID":"8b858060-b802-452d-aa2a-1be4f38efe74","Type":"ContainerStarted","Data":"6f8a93bc17a7f4f33692491a24dc668a0497a94d52d72e1236f01615f88820d5"} Jan 27 15:54:17 crc kubenswrapper[4697]: I0127 15:54:17.822166 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" event={"ID":"8b858060-b802-452d-aa2a-1be4f38efe74","Type":"ContainerStarted","Data":"50ea95763f795893a05be5a7385e9dff96525242436ec81222a1be8dfffe7a1d"} Jan 27 15:54:17 crc kubenswrapper[4697]: I0127 15:54:17.845216 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" podStartSLOduration=2.3674667019999998 podStartE2EDuration="2.845194175s" podCreationTimestamp="2026-01-27 15:54:15 +0000 UTC" firstStartedPulling="2026-01-27 
15:54:16.983418479 +0000 UTC m=+2753.155818260" lastFinishedPulling="2026-01-27 15:54:17.461145952 +0000 UTC m=+2753.633545733" observedRunningTime="2026-01-27 15:54:17.842855537 +0000 UTC m=+2754.015255318" watchObservedRunningTime="2026-01-27 15:54:17.845194175 +0000 UTC m=+2754.017593956" Jan 27 15:55:00 crc kubenswrapper[4697]: I0127 15:55:00.854353 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7bjfp"] Jan 27 15:55:00 crc kubenswrapper[4697]: I0127 15:55:00.856613 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:00 crc kubenswrapper[4697]: I0127 15:55:00.883267 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7bjfp"] Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.008269 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r272d\" (UniqueName: \"kubernetes.io/projected/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-kube-api-access-r272d\") pod \"community-operators-7bjfp\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.008548 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-utilities\") pod \"community-operators-7bjfp\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.008637 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-catalog-content\") pod \"community-operators-7bjfp\" (UID: 
\"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.110071 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-catalog-content\") pod \"community-operators-7bjfp\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.110205 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r272d\" (UniqueName: \"kubernetes.io/projected/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-kube-api-access-r272d\") pod \"community-operators-7bjfp\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.110239 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-utilities\") pod \"community-operators-7bjfp\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.110645 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-catalog-content\") pod \"community-operators-7bjfp\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.110742 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-utilities\") pod \"community-operators-7bjfp\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") 
" pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.134155 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r272d\" (UniqueName: \"kubernetes.io/projected/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-kube-api-access-r272d\") pod \"community-operators-7bjfp\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.185907 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:01 crc kubenswrapper[4697]: I0127 15:55:01.910762 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7bjfp"] Jan 27 15:55:02 crc kubenswrapper[4697]: I0127 15:55:02.237871 4697 generic.go:334] "Generic (PLEG): container finished" podID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerID="93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db" exitCode=0 Jan 27 15:55:02 crc kubenswrapper[4697]: I0127 15:55:02.237909 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bjfp" event={"ID":"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1","Type":"ContainerDied","Data":"93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db"} Jan 27 15:55:02 crc kubenswrapper[4697]: I0127 15:55:02.237931 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bjfp" event={"ID":"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1","Type":"ContainerStarted","Data":"84cee5c8b43cea18430d2b11ba40c40ff15b978d84344dee7e4e7b234c4d5f6c"} Jan 27 15:55:03 crc kubenswrapper[4697]: I0127 15:55:03.247893 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bjfp" 
event={"ID":"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1","Type":"ContainerStarted","Data":"d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4"} Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.036521 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-szlh5"] Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.039739 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.050317 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-szlh5"] Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.115290 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-catalog-content\") pod \"redhat-marketplace-szlh5\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.115434 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-utilities\") pod \"redhat-marketplace-szlh5\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.115471 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mlj2\" (UniqueName: \"kubernetes.io/projected/42ea8a5f-5941-4700-9dbb-896b8c86db42-kube-api-access-2mlj2\") pod \"redhat-marketplace-szlh5\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.217874 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-catalog-content\") pod \"redhat-marketplace-szlh5\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.217995 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-utilities\") pod \"redhat-marketplace-szlh5\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.218029 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mlj2\" (UniqueName: \"kubernetes.io/projected/42ea8a5f-5941-4700-9dbb-896b8c86db42-kube-api-access-2mlj2\") pod \"redhat-marketplace-szlh5\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.218883 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-catalog-content\") pod \"redhat-marketplace-szlh5\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.219133 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-utilities\") pod \"redhat-marketplace-szlh5\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.238693 4697 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2mlj2\" (UniqueName: \"kubernetes.io/projected/42ea8a5f-5941-4700-9dbb-896b8c86db42-kube-api-access-2mlj2\") pod \"redhat-marketplace-szlh5\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.276005 4697 generic.go:334] "Generic (PLEG): container finished" podID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerID="d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4" exitCode=0 Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.276097 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bjfp" event={"ID":"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1","Type":"ContainerDied","Data":"d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4"} Jan 27 15:55:06 crc kubenswrapper[4697]: I0127 15:55:06.377733 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:07 crc kubenswrapper[4697]: I0127 15:55:07.035126 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-szlh5"] Jan 27 15:55:07 crc kubenswrapper[4697]: I0127 15:55:07.284478 4697 generic.go:334] "Generic (PLEG): container finished" podID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerID="f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8" exitCode=0 Jan 27 15:55:07 crc kubenswrapper[4697]: I0127 15:55:07.284549 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szlh5" event={"ID":"42ea8a5f-5941-4700-9dbb-896b8c86db42","Type":"ContainerDied","Data":"f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8"} Jan 27 15:55:07 crc kubenswrapper[4697]: I0127 15:55:07.284574 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szlh5" 
event={"ID":"42ea8a5f-5941-4700-9dbb-896b8c86db42","Type":"ContainerStarted","Data":"148d2bfba3e30bd28010bcc83713b4af63d4306a866725252374d01f3b8c16d9"} Jan 27 15:55:07 crc kubenswrapper[4697]: I0127 15:55:07.287690 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bjfp" event={"ID":"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1","Type":"ContainerStarted","Data":"1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a"} Jan 27 15:55:07 crc kubenswrapper[4697]: I0127 15:55:07.342700 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7bjfp" podStartSLOduration=2.811700079 podStartE2EDuration="7.342681043s" podCreationTimestamp="2026-01-27 15:55:00 +0000 UTC" firstStartedPulling="2026-01-27 15:55:02.240593683 +0000 UTC m=+2798.412993464" lastFinishedPulling="2026-01-27 15:55:06.771574647 +0000 UTC m=+2802.943974428" observedRunningTime="2026-01-27 15:55:07.341579425 +0000 UTC m=+2803.513979206" watchObservedRunningTime="2026-01-27 15:55:07.342681043 +0000 UTC m=+2803.515080834" Jan 27 15:55:08 crc kubenswrapper[4697]: I0127 15:55:08.301416 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szlh5" event={"ID":"42ea8a5f-5941-4700-9dbb-896b8c86db42","Type":"ContainerStarted","Data":"8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151"} Jan 27 15:55:09 crc kubenswrapper[4697]: I0127 15:55:09.311442 4697 generic.go:334] "Generic (PLEG): container finished" podID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerID="8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151" exitCode=0 Jan 27 15:55:09 crc kubenswrapper[4697]: I0127 15:55:09.311495 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szlh5" 
event={"ID":"42ea8a5f-5941-4700-9dbb-896b8c86db42","Type":"ContainerDied","Data":"8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151"} Jan 27 15:55:10 crc kubenswrapper[4697]: I0127 15:55:10.326918 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szlh5" event={"ID":"42ea8a5f-5941-4700-9dbb-896b8c86db42","Type":"ContainerStarted","Data":"9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d"} Jan 27 15:55:11 crc kubenswrapper[4697]: I0127 15:55:11.186961 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:11 crc kubenswrapper[4697]: I0127 15:55:11.187213 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:11 crc kubenswrapper[4697]: I0127 15:55:11.239817 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:11 crc kubenswrapper[4697]: I0127 15:55:11.273370 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-szlh5" podStartSLOduration=2.595018743 podStartE2EDuration="5.273345894s" podCreationTimestamp="2026-01-27 15:55:06 +0000 UTC" firstStartedPulling="2026-01-27 15:55:07.285847429 +0000 UTC m=+2803.458247210" lastFinishedPulling="2026-01-27 15:55:09.96417459 +0000 UTC m=+2806.136574361" observedRunningTime="2026-01-27 15:55:10.354190562 +0000 UTC m=+2806.526590343" watchObservedRunningTime="2026-01-27 15:55:11.273345894 +0000 UTC m=+2807.445745675" Jan 27 15:55:12 crc kubenswrapper[4697]: I0127 15:55:12.389112 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:13 crc kubenswrapper[4697]: I0127 15:55:13.425464 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-7bjfp"] Jan 27 15:55:14 crc kubenswrapper[4697]: I0127 15:55:14.362426 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7bjfp" podUID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerName="registry-server" containerID="cri-o://1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a" gracePeriod=2 Jan 27 15:55:14 crc kubenswrapper[4697]: I0127 15:55:14.865628 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.046553 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-utilities\") pod \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.046875 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r272d\" (UniqueName: \"kubernetes.io/projected/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-kube-api-access-r272d\") pod \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.047177 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-catalog-content\") pod \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\" (UID: \"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1\") " Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.047897 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-utilities" (OuterVolumeSpecName: "utilities") pod "e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" (UID: 
"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.048346 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.055213 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-kube-api-access-r272d" (OuterVolumeSpecName: "kube-api-access-r272d") pod "e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" (UID: "e4e9bd5b-3ba4-4648-993f-fba75fd55fc1"). InnerVolumeSpecName "kube-api-access-r272d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.107029 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" (UID: "e4e9bd5b-3ba4-4648-993f-fba75fd55fc1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.149580 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.149608 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r272d\" (UniqueName: \"kubernetes.io/projected/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1-kube-api-access-r272d\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.375988 4697 generic.go:334] "Generic (PLEG): container finished" podID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerID="1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a" exitCode=0 Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.376032 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bjfp" event={"ID":"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1","Type":"ContainerDied","Data":"1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a"} Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.376057 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bjfp" event={"ID":"e4e9bd5b-3ba4-4648-993f-fba75fd55fc1","Type":"ContainerDied","Data":"84cee5c8b43cea18430d2b11ba40c40ff15b978d84344dee7e4e7b234c4d5f6c"} Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.376073 4697 scope.go:117] "RemoveContainer" containerID="1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.376072 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7bjfp" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.438448 4697 scope.go:117] "RemoveContainer" containerID="d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.464353 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7bjfp"] Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.474239 4697 scope.go:117] "RemoveContainer" containerID="93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.476873 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7bjfp"] Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.519064 4697 scope.go:117] "RemoveContainer" containerID="1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a" Jan 27 15:55:15 crc kubenswrapper[4697]: E0127 15:55:15.519546 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a\": container with ID starting with 1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a not found: ID does not exist" containerID="1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.519580 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a"} err="failed to get container status \"1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a\": rpc error: code = NotFound desc = could not find container \"1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a\": container with ID starting with 1e36cccf727ce96f44c1ef4101a2f281d7c2720143f4241dbdb8bb881e9d3c5a not 
found: ID does not exist" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.519608 4697 scope.go:117] "RemoveContainer" containerID="d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4" Jan 27 15:55:15 crc kubenswrapper[4697]: E0127 15:55:15.521662 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4\": container with ID starting with d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4 not found: ID does not exist" containerID="d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.521694 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4"} err="failed to get container status \"d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4\": rpc error: code = NotFound desc = could not find container \"d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4\": container with ID starting with d9a02dfd89121862f55e9551fff79251d11d1e5ba8203d179c6dd744eb60f3a4 not found: ID does not exist" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.521714 4697 scope.go:117] "RemoveContainer" containerID="93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db" Jan 27 15:55:15 crc kubenswrapper[4697]: E0127 15:55:15.522150 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db\": container with ID starting with 93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db not found: ID does not exist" containerID="93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db" Jan 27 15:55:15 crc kubenswrapper[4697]: I0127 15:55:15.522174 4697 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db"} err="failed to get container status \"93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db\": rpc error: code = NotFound desc = could not find container \"93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db\": container with ID starting with 93f743b250eb5c1a7971482b09d085ccbce62153b1644968f221a738df7452db not found: ID does not exist" Jan 27 15:55:16 crc kubenswrapper[4697]: I0127 15:55:16.378391 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:16 crc kubenswrapper[4697]: I0127 15:55:16.378775 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:16 crc kubenswrapper[4697]: I0127 15:55:16.429055 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:16 crc kubenswrapper[4697]: I0127 15:55:16.578872 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" path="/var/lib/kubelet/pods/e4e9bd5b-3ba4-4648-993f-fba75fd55fc1/volumes" Jan 27 15:55:17 crc kubenswrapper[4697]: I0127 15:55:17.450824 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:17 crc kubenswrapper[4697]: I0127 15:55:17.821827 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-szlh5"] Jan 27 15:55:19 crc kubenswrapper[4697]: I0127 15:55:19.723899 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-szlh5" podUID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerName="registry-server" 
containerID="cri-o://9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d" gracePeriod=2 Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.251061 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.354659 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-utilities\") pod \"42ea8a5f-5941-4700-9dbb-896b8c86db42\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.355086 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-catalog-content\") pod \"42ea8a5f-5941-4700-9dbb-896b8c86db42\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.355264 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mlj2\" (UniqueName: \"kubernetes.io/projected/42ea8a5f-5941-4700-9dbb-896b8c86db42-kube-api-access-2mlj2\") pod \"42ea8a5f-5941-4700-9dbb-896b8c86db42\" (UID: \"42ea8a5f-5941-4700-9dbb-896b8c86db42\") " Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.355582 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-utilities" (OuterVolumeSpecName: "utilities") pod "42ea8a5f-5941-4700-9dbb-896b8c86db42" (UID: "42ea8a5f-5941-4700-9dbb-896b8c86db42"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.355930 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.363383 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ea8a5f-5941-4700-9dbb-896b8c86db42-kube-api-access-2mlj2" (OuterVolumeSpecName: "kube-api-access-2mlj2") pod "42ea8a5f-5941-4700-9dbb-896b8c86db42" (UID: "42ea8a5f-5941-4700-9dbb-896b8c86db42"). InnerVolumeSpecName "kube-api-access-2mlj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.381132 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42ea8a5f-5941-4700-9dbb-896b8c86db42" (UID: "42ea8a5f-5941-4700-9dbb-896b8c86db42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.459109 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mlj2\" (UniqueName: \"kubernetes.io/projected/42ea8a5f-5941-4700-9dbb-896b8c86db42-kube-api-access-2mlj2\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.459153 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ea8a5f-5941-4700-9dbb-896b8c86db42-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.736530 4697 generic.go:334] "Generic (PLEG): container finished" podID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerID="9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d" exitCode=0 Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.736578 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szlh5" event={"ID":"42ea8a5f-5941-4700-9dbb-896b8c86db42","Type":"ContainerDied","Data":"9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d"} Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.736606 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szlh5" event={"ID":"42ea8a5f-5941-4700-9dbb-896b8c86db42","Type":"ContainerDied","Data":"148d2bfba3e30bd28010bcc83713b4af63d4306a866725252374d01f3b8c16d9"} Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.736622 4697 scope.go:117] "RemoveContainer" containerID="9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.737932 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szlh5" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.762228 4697 scope.go:117] "RemoveContainer" containerID="8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.762717 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-szlh5"] Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.781662 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-szlh5"] Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.785385 4697 scope.go:117] "RemoveContainer" containerID="f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.827932 4697 scope.go:117] "RemoveContainer" containerID="9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d" Jan 27 15:55:20 crc kubenswrapper[4697]: E0127 15:55:20.828305 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d\": container with ID starting with 9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d not found: ID does not exist" containerID="9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.828351 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d"} err="failed to get container status \"9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d\": rpc error: code = NotFound desc = could not find container \"9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d\": container with ID starting with 9c6aaf2c1202ed0fbb7082fe9bff2a0671b4ca813c01345a12b6ff2ea2d6f56d not found: 
ID does not exist" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.828373 4697 scope.go:117] "RemoveContainer" containerID="8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151" Jan 27 15:55:20 crc kubenswrapper[4697]: E0127 15:55:20.828688 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151\": container with ID starting with 8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151 not found: ID does not exist" containerID="8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.828708 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151"} err="failed to get container status \"8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151\": rpc error: code = NotFound desc = could not find container \"8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151\": container with ID starting with 8edbca1889596f70284914a1f96c56d9131d17c2a5febdd7b805413bc0b93151 not found: ID does not exist" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.828721 4697 scope.go:117] "RemoveContainer" containerID="f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8" Jan 27 15:55:20 crc kubenswrapper[4697]: E0127 15:55:20.829026 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8\": container with ID starting with f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8 not found: ID does not exist" containerID="f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8" Jan 27 15:55:20 crc kubenswrapper[4697]: I0127 15:55:20.829051 4697 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8"} err="failed to get container status \"f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8\": rpc error: code = NotFound desc = could not find container \"f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8\": container with ID starting with f5a68191dddcd62b25c8adaf9a3ab22f71ed68a51f42fe650832caecd29272c8 not found: ID does not exist" Jan 27 15:55:22 crc kubenswrapper[4697]: I0127 15:55:22.578996 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ea8a5f-5941-4700-9dbb-896b8c86db42" path="/var/lib/kubelet/pods/42ea8a5f-5941-4700-9dbb-896b8c86db42/volumes" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.375162 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-87fxr"] Jan 27 15:55:44 crc kubenswrapper[4697]: E0127 15:55:44.376128 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerName="extract-utilities" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.376144 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerName="extract-utilities" Jan 27 15:55:44 crc kubenswrapper[4697]: E0127 15:55:44.376160 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerName="extract-content" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.376168 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerName="extract-content" Jan 27 15:55:44 crc kubenswrapper[4697]: E0127 15:55:44.376180 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerName="extract-content" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.376186 4697 
state_mem.go:107] "Deleted CPUSet assignment" podUID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerName="extract-content" Jan 27 15:55:44 crc kubenswrapper[4697]: E0127 15:55:44.376212 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerName="extract-utilities" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.376218 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerName="extract-utilities" Jan 27 15:55:44 crc kubenswrapper[4697]: E0127 15:55:44.376234 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerName="registry-server" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.376243 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerName="registry-server" Jan 27 15:55:44 crc kubenswrapper[4697]: E0127 15:55:44.376258 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerName="registry-server" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.376265 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerName="registry-server" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.376487 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ea8a5f-5941-4700-9dbb-896b8c86db42" containerName="registry-server" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.376511 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e9bd5b-3ba4-4648-993f-fba75fd55fc1" containerName="registry-server" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.378008 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.392538 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87fxr"] Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.529501 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-utilities\") pod \"redhat-operators-87fxr\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.529692 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25szm\" (UniqueName: \"kubernetes.io/projected/46ba372e-b97a-4144-9296-cf0bf04d5ed2-kube-api-access-25szm\") pod \"redhat-operators-87fxr\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.529838 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-catalog-content\") pod \"redhat-operators-87fxr\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.631438 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-utilities\") pod \"redhat-operators-87fxr\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.631965 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-utilities\") pod \"redhat-operators-87fxr\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.632102 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25szm\" (UniqueName: \"kubernetes.io/projected/46ba372e-b97a-4144-9296-cf0bf04d5ed2-kube-api-access-25szm\") pod \"redhat-operators-87fxr\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.632396 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-catalog-content\") pod \"redhat-operators-87fxr\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.632613 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-catalog-content\") pod \"redhat-operators-87fxr\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.665364 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25szm\" (UniqueName: \"kubernetes.io/projected/46ba372e-b97a-4144-9296-cf0bf04d5ed2-kube-api-access-25szm\") pod \"redhat-operators-87fxr\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:44 crc kubenswrapper[4697]: I0127 15:55:44.697551 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:45 crc kubenswrapper[4697]: I0127 15:55:45.182417 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87fxr"] Jan 27 15:55:45 crc kubenswrapper[4697]: I0127 15:55:45.967260 4697 generic.go:334] "Generic (PLEG): container finished" podID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerID="e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000" exitCode=0 Jan 27 15:55:45 crc kubenswrapper[4697]: I0127 15:55:45.967354 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87fxr" event={"ID":"46ba372e-b97a-4144-9296-cf0bf04d5ed2","Type":"ContainerDied","Data":"e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000"} Jan 27 15:55:45 crc kubenswrapper[4697]: I0127 15:55:45.967585 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87fxr" event={"ID":"46ba372e-b97a-4144-9296-cf0bf04d5ed2","Type":"ContainerStarted","Data":"f962e5710ec028e063df547c43d2a15d18c3dd7e0341d97b80334e013158faab"} Jan 27 15:55:46 crc kubenswrapper[4697]: I0127 15:55:46.980035 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87fxr" event={"ID":"46ba372e-b97a-4144-9296-cf0bf04d5ed2","Type":"ContainerStarted","Data":"6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87"} Jan 27 15:55:51 crc kubenswrapper[4697]: I0127 15:55:51.064385 4697 generic.go:334] "Generic (PLEG): container finished" podID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerID="6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87" exitCode=0 Jan 27 15:55:51 crc kubenswrapper[4697]: I0127 15:55:51.064500 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87fxr" 
event={"ID":"46ba372e-b97a-4144-9296-cf0bf04d5ed2","Type":"ContainerDied","Data":"6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87"} Jan 27 15:55:52 crc kubenswrapper[4697]: I0127 15:55:52.079626 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87fxr" event={"ID":"46ba372e-b97a-4144-9296-cf0bf04d5ed2","Type":"ContainerStarted","Data":"de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49"} Jan 27 15:55:52 crc kubenswrapper[4697]: I0127 15:55:52.101940 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-87fxr" podStartSLOduration=2.5421291569999998 podStartE2EDuration="8.101925407s" podCreationTimestamp="2026-01-27 15:55:44 +0000 UTC" firstStartedPulling="2026-01-27 15:55:45.969285318 +0000 UTC m=+2842.141685099" lastFinishedPulling="2026-01-27 15:55:51.529081568 +0000 UTC m=+2847.701481349" observedRunningTime="2026-01-27 15:55:52.099242601 +0000 UTC m=+2848.271642402" watchObservedRunningTime="2026-01-27 15:55:52.101925407 +0000 UTC m=+2848.274325188" Jan 27 15:55:54 crc kubenswrapper[4697]: I0127 15:55:54.698461 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:54 crc kubenswrapper[4697]: I0127 15:55:54.699098 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:55:55 crc kubenswrapper[4697]: I0127 15:55:55.108841 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:55:55 crc kubenswrapper[4697]: I0127 15:55:55.108900 4697 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:55:55 crc kubenswrapper[4697]: I0127 15:55:55.759062 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-87fxr" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerName="registry-server" probeResult="failure" output=< Jan 27 15:55:55 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 15:55:55 crc kubenswrapper[4697]: > Jan 27 15:56:04 crc kubenswrapper[4697]: I0127 15:56:04.758622 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:56:04 crc kubenswrapper[4697]: I0127 15:56:04.829487 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:56:05 crc kubenswrapper[4697]: I0127 15:56:05.003555 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87fxr"] Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.200047 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-87fxr" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerName="registry-server" containerID="cri-o://de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49" gracePeriod=2 Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.654615 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.773520 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-utilities\") pod \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.774046 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25szm\" (UniqueName: \"kubernetes.io/projected/46ba372e-b97a-4144-9296-cf0bf04d5ed2-kube-api-access-25szm\") pod \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.774176 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-utilities" (OuterVolumeSpecName: "utilities") pod "46ba372e-b97a-4144-9296-cf0bf04d5ed2" (UID: "46ba372e-b97a-4144-9296-cf0bf04d5ed2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.774247 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-catalog-content\") pod \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\" (UID: \"46ba372e-b97a-4144-9296-cf0bf04d5ed2\") " Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.775048 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.781428 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ba372e-b97a-4144-9296-cf0bf04d5ed2-kube-api-access-25szm" (OuterVolumeSpecName: "kube-api-access-25szm") pod "46ba372e-b97a-4144-9296-cf0bf04d5ed2" (UID: "46ba372e-b97a-4144-9296-cf0bf04d5ed2"). InnerVolumeSpecName "kube-api-access-25szm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.877560 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25szm\" (UniqueName: \"kubernetes.io/projected/46ba372e-b97a-4144-9296-cf0bf04d5ed2-kube-api-access-25szm\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.914831 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46ba372e-b97a-4144-9296-cf0bf04d5ed2" (UID: "46ba372e-b97a-4144-9296-cf0bf04d5ed2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:06 crc kubenswrapper[4697]: I0127 15:56:06.979394 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ba372e-b97a-4144-9296-cf0bf04d5ed2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.212920 4697 generic.go:334] "Generic (PLEG): container finished" podID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerID="de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49" exitCode=0 Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.212966 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87fxr" event={"ID":"46ba372e-b97a-4144-9296-cf0bf04d5ed2","Type":"ContainerDied","Data":"de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49"} Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.213011 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87fxr" event={"ID":"46ba372e-b97a-4144-9296-cf0bf04d5ed2","Type":"ContainerDied","Data":"f962e5710ec028e063df547c43d2a15d18c3dd7e0341d97b80334e013158faab"} Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.213028 4697 scope.go:117] "RemoveContainer" containerID="de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.213042 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87fxr" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.261189 4697 scope.go:117] "RemoveContainer" containerID="6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.267933 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87fxr"] Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.277652 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-87fxr"] Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.287727 4697 scope.go:117] "RemoveContainer" containerID="e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.333559 4697 scope.go:117] "RemoveContainer" containerID="de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49" Jan 27 15:56:07 crc kubenswrapper[4697]: E0127 15:56:07.335229 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49\": container with ID starting with de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49 not found: ID does not exist" containerID="de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.335264 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49"} err="failed to get container status \"de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49\": rpc error: code = NotFound desc = could not find container \"de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49\": container with ID starting with de614d6ccad31806c37733b662a1e1329903ba5a340a407614f77211a2fa8b49 not found: ID does 
not exist" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.335288 4697 scope.go:117] "RemoveContainer" containerID="6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87" Jan 27 15:56:07 crc kubenswrapper[4697]: E0127 15:56:07.336270 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87\": container with ID starting with 6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87 not found: ID does not exist" containerID="6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.336297 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87"} err="failed to get container status \"6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87\": rpc error: code = NotFound desc = could not find container \"6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87\": container with ID starting with 6ddfa2d2c7769b7584668a22f5509d9cb59ee59a01ecedb0912ec25adbd18c87 not found: ID does not exist" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.336312 4697 scope.go:117] "RemoveContainer" containerID="e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000" Jan 27 15:56:07 crc kubenswrapper[4697]: E0127 15:56:07.336842 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000\": container with ID starting with e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000 not found: ID does not exist" containerID="e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000" Jan 27 15:56:07 crc kubenswrapper[4697]: I0127 15:56:07.336863 4697 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000"} err="failed to get container status \"e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000\": rpc error: code = NotFound desc = could not find container \"e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000\": container with ID starting with e31b5775f82673b83b1d51f9d7f3f80b9db223ee8288e14ac039b462368b2000 not found: ID does not exist" Jan 27 15:56:08 crc kubenswrapper[4697]: I0127 15:56:08.580559 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" path="/var/lib/kubelet/pods/46ba372e-b97a-4144-9296-cf0bf04d5ed2/volumes" Jan 27 15:56:25 crc kubenswrapper[4697]: I0127 15:56:25.109342 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:56:25 crc kubenswrapper[4697]: I0127 15:56:25.109923 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:56:54 crc kubenswrapper[4697]: I0127 15:56:54.616256 4697 generic.go:334] "Generic (PLEG): container finished" podID="8b858060-b802-452d-aa2a-1be4f38efe74" containerID="6f8a93bc17a7f4f33692491a24dc668a0497a94d52d72e1236f01615f88820d5" exitCode=0 Jan 27 15:56:54 crc kubenswrapper[4697]: I0127 15:56:54.616793 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" 
event={"ID":"8b858060-b802-452d-aa2a-1be4f38efe74","Type":"ContainerDied","Data":"6f8a93bc17a7f4f33692491a24dc668a0497a94d52d72e1236f01615f88820d5"} Jan 27 15:56:55 crc kubenswrapper[4697]: I0127 15:56:55.109356 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:56:55 crc kubenswrapper[4697]: I0127 15:56:55.109617 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:56:55 crc kubenswrapper[4697]: I0127 15:56:55.109661 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 15:56:55 crc kubenswrapper[4697]: I0127 15:56:55.110322 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:56:55 crc kubenswrapper[4697]: I0127 15:56:55.110372 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" gracePeriod=600 Jan 27 15:56:55 crc kubenswrapper[4697]: E0127 15:56:55.248211 4697 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:56:55 crc kubenswrapper[4697]: I0127 15:56:55.629246 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" exitCode=0 Jan 27 15:56:55 crc kubenswrapper[4697]: I0127 15:56:55.629435 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df"} Jan 27 15:56:55 crc kubenswrapper[4697]: I0127 15:56:55.631461 4697 scope.go:117] "RemoveContainer" containerID="f6fb7c40b002bba85b4cc9851083ccbb1cbbdf65b8f193243e0beeee887e82e7" Jan 27 15:56:55 crc kubenswrapper[4697]: I0127 15:56:55.632922 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:56:55 crc kubenswrapper[4697]: E0127 15:56:55.633269 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.119732 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.258852 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-0\") pod \"8b858060-b802-452d-aa2a-1be4f38efe74\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.258914 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-combined-ca-bundle\") pod \"8b858060-b802-452d-aa2a-1be4f38efe74\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.258972 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-1\") pod \"8b858060-b802-452d-aa2a-1be4f38efe74\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.258995 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-844f8\" (UniqueName: \"kubernetes.io/projected/8b858060-b802-452d-aa2a-1be4f38efe74-kube-api-access-844f8\") pod \"8b858060-b802-452d-aa2a-1be4f38efe74\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.259018 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b858060-b802-452d-aa2a-1be4f38efe74-nova-extra-config-0\") pod \"8b858060-b802-452d-aa2a-1be4f38efe74\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.259035 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-inventory\") pod \"8b858060-b802-452d-aa2a-1be4f38efe74\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.259061 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-ssh-key-openstack-edpm-ipam\") pod \"8b858060-b802-452d-aa2a-1be4f38efe74\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.259168 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-0\") pod \"8b858060-b802-452d-aa2a-1be4f38efe74\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.259261 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-1\") pod \"8b858060-b802-452d-aa2a-1be4f38efe74\" (UID: \"8b858060-b802-452d-aa2a-1be4f38efe74\") " Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.264323 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b858060-b802-452d-aa2a-1be4f38efe74-kube-api-access-844f8" (OuterVolumeSpecName: "kube-api-access-844f8") pod "8b858060-b802-452d-aa2a-1be4f38efe74" (UID: "8b858060-b802-452d-aa2a-1be4f38efe74"). InnerVolumeSpecName "kube-api-access-844f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.264674 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8b858060-b802-452d-aa2a-1be4f38efe74" (UID: "8b858060-b802-452d-aa2a-1be4f38efe74"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.285520 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8b858060-b802-452d-aa2a-1be4f38efe74" (UID: "8b858060-b802-452d-aa2a-1be4f38efe74"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.298491 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8b858060-b802-452d-aa2a-1be4f38efe74" (UID: "8b858060-b802-452d-aa2a-1be4f38efe74"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.299506 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b858060-b802-452d-aa2a-1be4f38efe74-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8b858060-b802-452d-aa2a-1be4f38efe74" (UID: "8b858060-b802-452d-aa2a-1be4f38efe74"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.312606 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b858060-b802-452d-aa2a-1be4f38efe74" (UID: "8b858060-b802-452d-aa2a-1be4f38efe74"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.327051 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8b858060-b802-452d-aa2a-1be4f38efe74" (UID: "8b858060-b802-452d-aa2a-1be4f38efe74"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.330674 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8b858060-b802-452d-aa2a-1be4f38efe74" (UID: "8b858060-b802-452d-aa2a-1be4f38efe74"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.332126 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-inventory" (OuterVolumeSpecName: "inventory") pod "8b858060-b802-452d-aa2a-1be4f38efe74" (UID: "8b858060-b802-452d-aa2a-1be4f38efe74"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.360905 4697 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.360949 4697 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.360958 4697 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.360967 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-844f8\" (UniqueName: \"kubernetes.io/projected/8b858060-b802-452d-aa2a-1be4f38efe74-kube-api-access-844f8\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.360976 4697 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b858060-b802-452d-aa2a-1be4f38efe74-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.360985 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.360994 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-ssh-key-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.361004 4697 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.361012 4697 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b858060-b802-452d-aa2a-1be4f38efe74-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.654121 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" event={"ID":"8b858060-b802-452d-aa2a-1be4f38efe74","Type":"ContainerDied","Data":"50ea95763f795893a05be5a7385e9dff96525242436ec81222a1be8dfffe7a1d"} Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.654161 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50ea95763f795893a05be5a7385e9dff96525242436ec81222a1be8dfffe7a1d" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.654210 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jk5zz" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.743006 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx"] Jan 27 15:56:56 crc kubenswrapper[4697]: E0127 15:56:56.746372 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b858060-b802-452d-aa2a-1be4f38efe74" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.746411 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b858060-b802-452d-aa2a-1be4f38efe74" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 15:56:56 crc kubenswrapper[4697]: E0127 15:56:56.746443 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerName="extract-content" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.746452 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerName="extract-content" Jan 27 15:56:56 crc kubenswrapper[4697]: E0127 15:56:56.746477 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerName="extract-utilities" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.746485 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerName="extract-utilities" Jan 27 15:56:56 crc kubenswrapper[4697]: E0127 15:56:56.746508 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerName="registry-server" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.746513 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerName="registry-server" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.746696 4697 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8b858060-b802-452d-aa2a-1be4f38efe74" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.746733 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ba372e-b97a-4144-9296-cf0bf04d5ed2" containerName="registry-server" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.747444 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.757162 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx"] Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.757699 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.760378 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.760831 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.762756 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ctbjc" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.767397 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.870125 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.870200 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.870228 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.870262 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.870408 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.870487 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8drd\" (UniqueName: \"kubernetes.io/projected/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-kube-api-access-w8drd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.870537 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.972398 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.972550 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: 
I0127 15:56:56.972675 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.972709 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.973608 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.973923 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.973968 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8drd\" (UniqueName: 
\"kubernetes.io/projected/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-kube-api-access-w8drd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.978374 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.987308 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.988054 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.988492 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:56 crc kubenswrapper[4697]: I0127 15:56:56.989293 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:57 crc kubenswrapper[4697]: I0127 15:56:56.999825 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:57 crc kubenswrapper[4697]: I0127 15:56:57.000768 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8drd\" (UniqueName: \"kubernetes.io/projected/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-kube-api-access-w8drd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:57 crc kubenswrapper[4697]: I0127 15:56:57.072303 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 15:56:57 crc kubenswrapper[4697]: I0127 15:56:57.662549 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx"] Jan 27 15:56:58 crc kubenswrapper[4697]: I0127 15:56:58.683154 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" event={"ID":"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce","Type":"ContainerStarted","Data":"9a7690a278c56003caa48b68b2789119167e6cb219760a3fa12df267dd6f09a6"} Jan 27 15:56:58 crc kubenswrapper[4697]: I0127 15:56:58.683501 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" event={"ID":"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce","Type":"ContainerStarted","Data":"a1d7c9bcd398371009374d93f4e021deb8e0cd9ace6be5060025b77b213cc384"} Jan 27 15:56:58 crc kubenswrapper[4697]: I0127 15:56:58.706172 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" podStartSLOduration=2.286094173 podStartE2EDuration="2.706151752s" podCreationTimestamp="2026-01-27 15:56:56 +0000 UTC" firstStartedPulling="2026-01-27 15:56:57.695843831 +0000 UTC m=+2913.868243612" lastFinishedPulling="2026-01-27 15:56:58.11590139 +0000 UTC m=+2914.288301191" observedRunningTime="2026-01-27 15:56:58.704654904 +0000 UTC m=+2914.877054685" watchObservedRunningTime="2026-01-27 15:56:58.706151752 +0000 UTC m=+2914.878551553" Jan 27 15:57:10 crc kubenswrapper[4697]: I0127 15:57:10.568367 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:57:10 crc kubenswrapper[4697]: E0127 15:57:10.569195 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:57:21 crc kubenswrapper[4697]: I0127 15:57:21.569429 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:57:21 crc kubenswrapper[4697]: E0127 15:57:21.570257 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:57:35 crc kubenswrapper[4697]: I0127 15:57:35.569522 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:57:35 crc kubenswrapper[4697]: E0127 15:57:35.570530 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:57:49 crc kubenswrapper[4697]: I0127 15:57:49.569516 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:57:49 crc kubenswrapper[4697]: E0127 15:57:49.570687 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:58:00 crc kubenswrapper[4697]: I0127 15:58:00.569092 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:58:00 crc kubenswrapper[4697]: E0127 15:58:00.569819 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:58:11 crc kubenswrapper[4697]: I0127 15:58:11.567948 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:58:11 crc kubenswrapper[4697]: E0127 15:58:11.568710 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:58:23 crc kubenswrapper[4697]: I0127 15:58:23.568497 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:58:23 crc kubenswrapper[4697]: E0127 15:58:23.569225 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:58:36 crc kubenswrapper[4697]: I0127 15:58:36.568600 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:58:36 crc kubenswrapper[4697]: E0127 15:58:36.569363 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:58:51 crc kubenswrapper[4697]: I0127 15:58:51.568604 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:58:51 crc kubenswrapper[4697]: E0127 15:58:51.569280 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:59:06 crc kubenswrapper[4697]: I0127 15:59:06.569699 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:59:06 crc kubenswrapper[4697]: E0127 15:59:06.570369 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:59:20 crc kubenswrapper[4697]: I0127 15:59:20.568323 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:59:20 crc kubenswrapper[4697]: E0127 15:59:20.569125 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:59:33 crc kubenswrapper[4697]: I0127 15:59:33.568265 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:59:33 crc kubenswrapper[4697]: E0127 15:59:33.569251 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:59:46 crc kubenswrapper[4697]: I0127 15:59:46.568266 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:59:46 crc kubenswrapper[4697]: E0127 15:59:46.569053 4697 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 15:59:57 crc kubenswrapper[4697]: I0127 15:59:57.568243 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 15:59:57 crc kubenswrapper[4697]: E0127 15:59:57.569081 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.157241 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9"] Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.159389 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.162148 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.162397 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.171814 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9"] Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.262673 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bc43e8-9cbd-49da-812d-00d599d2d563-config-volume\") pod \"collect-profiles-29492160-jpmt9\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.262735 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bc43e8-9cbd-49da-812d-00d599d2d563-secret-volume\") pod \"collect-profiles-29492160-jpmt9\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.262763 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx24r\" (UniqueName: \"kubernetes.io/projected/f5bc43e8-9cbd-49da-812d-00d599d2d563-kube-api-access-nx24r\") pod \"collect-profiles-29492160-jpmt9\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.364637 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bc43e8-9cbd-49da-812d-00d599d2d563-config-volume\") pod \"collect-profiles-29492160-jpmt9\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.364686 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bc43e8-9cbd-49da-812d-00d599d2d563-secret-volume\") pod \"collect-profiles-29492160-jpmt9\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.364711 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx24r\" (UniqueName: \"kubernetes.io/projected/f5bc43e8-9cbd-49da-812d-00d599d2d563-kube-api-access-nx24r\") pod \"collect-profiles-29492160-jpmt9\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.365660 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bc43e8-9cbd-49da-812d-00d599d2d563-config-volume\") pod \"collect-profiles-29492160-jpmt9\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.374482 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f5bc43e8-9cbd-49da-812d-00d599d2d563-secret-volume\") pod \"collect-profiles-29492160-jpmt9\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.382843 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx24r\" (UniqueName: \"kubernetes.io/projected/f5bc43e8-9cbd-49da-812d-00d599d2d563-kube-api-access-nx24r\") pod \"collect-profiles-29492160-jpmt9\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.482508 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:00 crc kubenswrapper[4697]: I0127 16:00:00.928311 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9"] Jan 27 16:00:01 crc kubenswrapper[4697]: I0127 16:00:01.360013 4697 generic.go:334] "Generic (PLEG): container finished" podID="f5bc43e8-9cbd-49da-812d-00d599d2d563" containerID="a427bcfe38b40a46901b0fcbe9eb44099b2eea1cb0511a61bdf04c8cc6b800be" exitCode=0 Jan 27 16:00:01 crc kubenswrapper[4697]: I0127 16:00:01.360059 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" event={"ID":"f5bc43e8-9cbd-49da-812d-00d599d2d563","Type":"ContainerDied","Data":"a427bcfe38b40a46901b0fcbe9eb44099b2eea1cb0511a61bdf04c8cc6b800be"} Jan 27 16:00:01 crc kubenswrapper[4697]: I0127 16:00:01.360088 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" 
event={"ID":"f5bc43e8-9cbd-49da-812d-00d599d2d563","Type":"ContainerStarted","Data":"0d965261fbd90f82bfdb729839a0f04279b56d654aef85d3960cf4322ae9decc"} Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.753661 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.832119 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bc43e8-9cbd-49da-812d-00d599d2d563-config-volume\") pod \"f5bc43e8-9cbd-49da-812d-00d599d2d563\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.832169 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bc43e8-9cbd-49da-812d-00d599d2d563-secret-volume\") pod \"f5bc43e8-9cbd-49da-812d-00d599d2d563\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.832263 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx24r\" (UniqueName: \"kubernetes.io/projected/f5bc43e8-9cbd-49da-812d-00d599d2d563-kube-api-access-nx24r\") pod \"f5bc43e8-9cbd-49da-812d-00d599d2d563\" (UID: \"f5bc43e8-9cbd-49da-812d-00d599d2d563\") " Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.833079 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bc43e8-9cbd-49da-812d-00d599d2d563-config-volume" (OuterVolumeSpecName: "config-volume") pod "f5bc43e8-9cbd-49da-812d-00d599d2d563" (UID: "f5bc43e8-9cbd-49da-812d-00d599d2d563"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.833774 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bc43e8-9cbd-49da-812d-00d599d2d563-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.838120 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5bc43e8-9cbd-49da-812d-00d599d2d563-kube-api-access-nx24r" (OuterVolumeSpecName: "kube-api-access-nx24r") pod "f5bc43e8-9cbd-49da-812d-00d599d2d563" (UID: "f5bc43e8-9cbd-49da-812d-00d599d2d563"). InnerVolumeSpecName "kube-api-access-nx24r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.838958 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5bc43e8-9cbd-49da-812d-00d599d2d563-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f5bc43e8-9cbd-49da-812d-00d599d2d563" (UID: "f5bc43e8-9cbd-49da-812d-00d599d2d563"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.936035 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bc43e8-9cbd-49da-812d-00d599d2d563-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:02 crc kubenswrapper[4697]: I0127 16:00:02.936091 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx24r\" (UniqueName: \"kubernetes.io/projected/f5bc43e8-9cbd-49da-812d-00d599d2d563-kube-api-access-nx24r\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:03 crc kubenswrapper[4697]: I0127 16:00:03.377483 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" event={"ID":"f5bc43e8-9cbd-49da-812d-00d599d2d563","Type":"ContainerDied","Data":"0d965261fbd90f82bfdb729839a0f04279b56d654aef85d3960cf4322ae9decc"} Jan 27 16:00:03 crc kubenswrapper[4697]: I0127 16:00:03.377529 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d965261fbd90f82bfdb729839a0f04279b56d654aef85d3960cf4322ae9decc" Jan 27 16:00:03 crc kubenswrapper[4697]: I0127 16:00:03.377525 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-jpmt9" Jan 27 16:00:03 crc kubenswrapper[4697]: I0127 16:00:03.848588 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25"] Jan 27 16:00:03 crc kubenswrapper[4697]: I0127 16:00:03.858602 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-hjr25"] Jan 27 16:00:04 crc kubenswrapper[4697]: I0127 16:00:04.587435 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257b9c99-2693-4921-b8bb-4ca5c66e711c" path="/var/lib/kubelet/pods/257b9c99-2693-4921-b8bb-4ca5c66e711c/volumes" Jan 27 16:00:08 crc kubenswrapper[4697]: I0127 16:00:08.568737 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:00:08 crc kubenswrapper[4697]: E0127 16:00:08.569978 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:00:21 crc kubenswrapper[4697]: I0127 16:00:21.568865 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:00:21 crc kubenswrapper[4697]: E0127 16:00:21.569834 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:00:33 crc kubenswrapper[4697]: I0127 16:00:33.569436 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:00:33 crc kubenswrapper[4697]: E0127 16:00:33.570715 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:00:40 crc kubenswrapper[4697]: I0127 16:00:40.763761 4697 generic.go:334] "Generic (PLEG): container finished" podID="2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" containerID="9a7690a278c56003caa48b68b2789119167e6cb219760a3fa12df267dd6f09a6" exitCode=0 Jan 27 16:00:40 crc kubenswrapper[4697]: I0127 16:00:40.763892 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" event={"ID":"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce","Type":"ContainerDied","Data":"9a7690a278c56003caa48b68b2789119167e6cb219760a3fa12df267dd6f09a6"} Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.222839 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.422428 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-1\") pod \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.422797 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-telemetry-combined-ca-bundle\") pod \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.422853 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-2\") pod \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.422896 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-0\") pod \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.423007 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8drd\" (UniqueName: \"kubernetes.io/projected/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-kube-api-access-w8drd\") pod \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") 
" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.423053 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ssh-key-openstack-edpm-ipam\") pod \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.423108 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-inventory\") pod \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\" (UID: \"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce\") " Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.446181 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" (UID: "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.446832 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-kube-api-access-w8drd" (OuterVolumeSpecName: "kube-api-access-w8drd") pod "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" (UID: "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce"). InnerVolumeSpecName "kube-api-access-w8drd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.451201 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-inventory" (OuterVolumeSpecName: "inventory") pod "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" (UID: "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.457530 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" (UID: "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.458360 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" (UID: "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.459156 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" (UID: "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.470985 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" (UID: "2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.525282 4697 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.525315 4697 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.525333 4697 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.525345 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8drd\" (UniqueName: \"kubernetes.io/projected/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-kube-api-access-w8drd\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.525359 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.525370 4697 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.525380 4697 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.783675 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" event={"ID":"2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce","Type":"ContainerDied","Data":"a1d7c9bcd398371009374d93f4e021deb8e0cd9ace6be5060025b77b213cc384"} Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.783711 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1d7c9bcd398371009374d93f4e021deb8e0cd9ace6be5060025b77b213cc384" Jan 27 16:00:42 crc kubenswrapper[4697]: I0127 16:00:42.783720 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx" Jan 27 16:00:44 crc kubenswrapper[4697]: I0127 16:00:44.568555 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:00:44 crc kubenswrapper[4697]: E0127 16:00:44.569163 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:00:48 crc kubenswrapper[4697]: I0127 16:00:48.446320 4697 scope.go:117] "RemoveContainer" containerID="f5d2797f5463ee235ca182a38e12058696a35ed747108c80ec94a54264e8f6a9" Jan 27 16:00:57 crc kubenswrapper[4697]: I0127 16:00:57.569128 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:00:57 crc kubenswrapper[4697]: E0127 16:00:57.570026 4697 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.165961 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492161-pqk7t"] Jan 27 16:01:00 crc kubenswrapper[4697]: E0127 16:01:00.166717 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bc43e8-9cbd-49da-812d-00d599d2d563" containerName="collect-profiles" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.166732 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bc43e8-9cbd-49da-812d-00d599d2d563" containerName="collect-profiles" Jan 27 16:01:00 crc kubenswrapper[4697]: E0127 16:01:00.166743 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.166752 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.166992 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5bc43e8-9cbd-49da-812d-00d599d2d563" containerName="collect-profiles" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.167010 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.167722 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.180625 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492161-pqk7t"] Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.289472 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-config-data\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.289533 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4d7\" (UniqueName: \"kubernetes.io/projected/02b729a6-604c-42d7-94d9-0d39bfcaf203-kube-api-access-zk4d7\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.289577 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-combined-ca-bundle\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.289595 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-fernet-keys\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.390683 4697 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-config-data\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.390773 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk4d7\" (UniqueName: \"kubernetes.io/projected/02b729a6-604c-42d7-94d9-0d39bfcaf203-kube-api-access-zk4d7\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.390886 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-combined-ca-bundle\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.390913 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-fernet-keys\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.402669 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-combined-ca-bundle\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.402776 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-config-data\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.406222 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-fernet-keys\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.408571 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk4d7\" (UniqueName: \"kubernetes.io/projected/02b729a6-604c-42d7-94d9-0d39bfcaf203-kube-api-access-zk4d7\") pod \"keystone-cron-29492161-pqk7t\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:00 crc kubenswrapper[4697]: I0127 16:01:00.503474 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:01 crc kubenswrapper[4697]: I0127 16:01:01.048069 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492161-pqk7t"] Jan 27 16:01:01 crc kubenswrapper[4697]: I0127 16:01:01.947285 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492161-pqk7t" event={"ID":"02b729a6-604c-42d7-94d9-0d39bfcaf203","Type":"ContainerStarted","Data":"5fff9dac6be8517638536601421a1bedacf860f6d95d16385d64d1c2d2cfb33b"} Jan 27 16:01:01 crc kubenswrapper[4697]: I0127 16:01:01.947587 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492161-pqk7t" event={"ID":"02b729a6-604c-42d7-94d9-0d39bfcaf203","Type":"ContainerStarted","Data":"cafc4a65719b9a146ad075797c8cd13e2e88ebd56a5fa4e92e4392c4a636a7bd"} Jan 27 16:01:04 crc kubenswrapper[4697]: I0127 16:01:04.981645 4697 generic.go:334] "Generic (PLEG): container finished" podID="02b729a6-604c-42d7-94d9-0d39bfcaf203" containerID="5fff9dac6be8517638536601421a1bedacf860f6d95d16385d64d1c2d2cfb33b" exitCode=0 Jan 27 16:01:04 crc kubenswrapper[4697]: I0127 16:01:04.981755 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492161-pqk7t" event={"ID":"02b729a6-604c-42d7-94d9-0d39bfcaf203","Type":"ContainerDied","Data":"5fff9dac6be8517638536601421a1bedacf860f6d95d16385d64d1c2d2cfb33b"} Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.333160 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.540220 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-fernet-keys\") pod \"02b729a6-604c-42d7-94d9-0d39bfcaf203\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.540366 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-combined-ca-bundle\") pod \"02b729a6-604c-42d7-94d9-0d39bfcaf203\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.540420 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-config-data\") pod \"02b729a6-604c-42d7-94d9-0d39bfcaf203\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.540515 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk4d7\" (UniqueName: \"kubernetes.io/projected/02b729a6-604c-42d7-94d9-0d39bfcaf203-kube-api-access-zk4d7\") pod \"02b729a6-604c-42d7-94d9-0d39bfcaf203\" (UID: \"02b729a6-604c-42d7-94d9-0d39bfcaf203\") " Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.552012 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "02b729a6-604c-42d7-94d9-0d39bfcaf203" (UID: "02b729a6-604c-42d7-94d9-0d39bfcaf203"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.564066 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b729a6-604c-42d7-94d9-0d39bfcaf203-kube-api-access-zk4d7" (OuterVolumeSpecName: "kube-api-access-zk4d7") pod "02b729a6-604c-42d7-94d9-0d39bfcaf203" (UID: "02b729a6-604c-42d7-94d9-0d39bfcaf203"). InnerVolumeSpecName "kube-api-access-zk4d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.580723 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02b729a6-604c-42d7-94d9-0d39bfcaf203" (UID: "02b729a6-604c-42d7-94d9-0d39bfcaf203"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.606757 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-config-data" (OuterVolumeSpecName: "config-data") pod "02b729a6-604c-42d7-94d9-0d39bfcaf203" (UID: "02b729a6-604c-42d7-94d9-0d39bfcaf203"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.645485 4697 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.645519 4697 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.645530 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b729a6-604c-42d7-94d9-0d39bfcaf203-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:01:06 crc kubenswrapper[4697]: I0127 16:01:06.645539 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk4d7\" (UniqueName: \"kubernetes.io/projected/02b729a6-604c-42d7-94d9-0d39bfcaf203-kube-api-access-zk4d7\") on node \"crc\" DevicePath \"\"" Jan 27 16:01:07 crc kubenswrapper[4697]: I0127 16:01:07.003003 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492161-pqk7t" event={"ID":"02b729a6-604c-42d7-94d9-0d39bfcaf203","Type":"ContainerDied","Data":"cafc4a65719b9a146ad075797c8cd13e2e88ebd56a5fa4e92e4392c4a636a7bd"} Jan 27 16:01:07 crc kubenswrapper[4697]: I0127 16:01:07.003343 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cafc4a65719b9a146ad075797c8cd13e2e88ebd56a5fa4e92e4392c4a636a7bd" Jan 27 16:01:07 crc kubenswrapper[4697]: I0127 16:01:07.003426 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492161-pqk7t" Jan 27 16:01:11 crc kubenswrapper[4697]: I0127 16:01:11.568659 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:01:11 crc kubenswrapper[4697]: E0127 16:01:11.570448 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:01:26 crc kubenswrapper[4697]: I0127 16:01:26.568072 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:01:26 crc kubenswrapper[4697]: E0127 16:01:26.568657 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:01:37 crc kubenswrapper[4697]: I0127 16:01:37.568847 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:01:37 crc kubenswrapper[4697]: E0127 16:01:37.569577 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:01:39 crc kubenswrapper[4697]: I0127 16:01:39.947958 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 16:01:39 crc kubenswrapper[4697]: E0127 16:01:39.948614 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b729a6-604c-42d7-94d9-0d39bfcaf203" containerName="keystone-cron" Jan 27 16:01:39 crc kubenswrapper[4697]: I0127 16:01:39.948627 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b729a6-604c-42d7-94d9-0d39bfcaf203" containerName="keystone-cron" Jan 27 16:01:39 crc kubenswrapper[4697]: I0127 16:01:39.948833 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b729a6-604c-42d7-94d9-0d39bfcaf203" containerName="keystone-cron" Jan 27 16:01:39 crc kubenswrapper[4697]: I0127 16:01:39.949454 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 16:01:39 crc kubenswrapper[4697]: I0127 16:01:39.951246 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 16:01:39 crc kubenswrapper[4697]: I0127 16:01:39.951501 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2q9kp" Jan 27 16:01:39 crc kubenswrapper[4697]: I0127 16:01:39.951980 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 16:01:39 crc kubenswrapper[4697]: I0127 16:01:39.952427 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 16:01:39 crc kubenswrapper[4697]: I0127 16:01:39.977666 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.103858 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.103969 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-config-data\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.104015 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.104046 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbrqc\" (UniqueName: \"kubernetes.io/projected/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-kube-api-access-qbrqc\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.104071 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 
16:01:40.104092 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.104139 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.104165 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.104196 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.205509 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.205555 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qbrqc\" (UniqueName: \"kubernetes.io/projected/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-kube-api-access-qbrqc\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.205604 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.205634 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.205688 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.205732 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.205774 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.205870 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.205917 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-config-data\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.207237 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.207738 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.207887 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") 
pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.208435 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.210417 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-config-data\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.213503 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.214382 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.229592 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " 
pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.238289 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbrqc\" (UniqueName: \"kubernetes.io/projected/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-kube-api-access-qbrqc\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.271770 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") " pod="openstack/tempest-tests-tempest" Jan 27 16:01:40 crc kubenswrapper[4697]: I0127 16:01:40.570487 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 16:01:41 crc kubenswrapper[4697]: I0127 16:01:41.061129 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 16:01:41 crc kubenswrapper[4697]: I0127 16:01:41.069434 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:01:41 crc kubenswrapper[4697]: I0127 16:01:41.315109 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"76805ce8-13c7-4d04-83c6-b70eaf33b9d8","Type":"ContainerStarted","Data":"4d185a3b05d3717bb71ef179553bf7e6f6a3da247b1ee10d7e73874cb0a370b8"} Jan 27 16:01:48 crc kubenswrapper[4697]: I0127 16:01:48.569382 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:01:48 crc kubenswrapper[4697]: E0127 16:01:48.570325 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:02:02 crc kubenswrapper[4697]: I0127 16:02:02.568416 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:02:19 crc kubenswrapper[4697]: E0127 16:02:19.293236 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 27 16:02:19 crc kubenswrapper[4697]: E0127 16:02:19.293988 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/
tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbrqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(76805ce8-13c7-4d04-83c6-b70eaf33b9d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 27 16:02:19 crc kubenswrapper[4697]: E0127 16:02:19.296122 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="76805ce8-13c7-4d04-83c6-b70eaf33b9d8" Jan 27 16:02:20 crc kubenswrapper[4697]: I0127 16:02:20.231127 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"3eb09feb9e26fa4ecab5032165e1fa5342a47b0a789b98e14109ed9027c3b15d"} Jan 27 16:02:20 crc kubenswrapper[4697]: E0127 16:02:20.233875 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="76805ce8-13c7-4d04-83c6-b70eaf33b9d8" Jan 27 16:02:35 crc kubenswrapper[4697]: I0127 16:02:35.199901 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 16:02:37 crc kubenswrapper[4697]: I0127 16:02:37.371248 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"76805ce8-13c7-4d04-83c6-b70eaf33b9d8","Type":"ContainerStarted","Data":"a4c7a0bbff0ebc952f4d45b407537f098be7e52fba2721dea1e9dd3fafa743bc"} Jan 27 16:02:37 crc kubenswrapper[4697]: I0127 16:02:37.396560 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.268327603 podStartE2EDuration="59.39653402s" podCreationTimestamp="2026-01-27 16:01:38 +0000 UTC" firstStartedPulling="2026-01-27 16:01:41.069237892 +0000 UTC 
m=+3197.241637673" lastFinishedPulling="2026-01-27 16:02:35.197444309 +0000 UTC m=+3251.369844090" observedRunningTime="2026-01-27 16:02:37.387700909 +0000 UTC m=+3253.560100710" watchObservedRunningTime="2026-01-27 16:02:37.39653402 +0000 UTC m=+3253.568933811" Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.713377 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t7hct"] Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.732082 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.792902 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7hct"] Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.866038 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-utilities\") pod \"certified-operators-t7hct\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.866216 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f6lg\" (UniqueName: \"kubernetes.io/projected/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-kube-api-access-5f6lg\") pod \"certified-operators-t7hct\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.866263 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-catalog-content\") pod \"certified-operators-t7hct\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " 
pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.967903 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-utilities\") pod \"certified-operators-t7hct\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.968019 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f6lg\" (UniqueName: \"kubernetes.io/projected/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-kube-api-access-5f6lg\") pod \"certified-operators-t7hct\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.968049 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-catalog-content\") pod \"certified-operators-t7hct\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.968408 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-utilities\") pod \"certified-operators-t7hct\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:30 crc kubenswrapper[4697]: I0127 16:03:30.968453 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-catalog-content\") pod \"certified-operators-t7hct\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " 
pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:31 crc kubenswrapper[4697]: I0127 16:03:31.002022 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f6lg\" (UniqueName: \"kubernetes.io/projected/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-kube-api-access-5f6lg\") pod \"certified-operators-t7hct\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:31 crc kubenswrapper[4697]: I0127 16:03:31.088422 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:32 crc kubenswrapper[4697]: I0127 16:03:32.040176 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7hct"] Jan 27 16:03:32 crc kubenswrapper[4697]: I0127 16:03:32.933698 4697 generic.go:334] "Generic (PLEG): container finished" podID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerID="31ed8140a90e0b6156467d22f1e3a66075dd67bb86447e338deee5f8f58859a7" exitCode=0 Jan 27 16:03:32 crc kubenswrapper[4697]: I0127 16:03:32.933808 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hct" event={"ID":"2f7942d4-1eba-48a4-b90d-be28d5e1b03d","Type":"ContainerDied","Data":"31ed8140a90e0b6156467d22f1e3a66075dd67bb86447e338deee5f8f58859a7"} Jan 27 16:03:32 crc kubenswrapper[4697]: I0127 16:03:32.934389 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hct" event={"ID":"2f7942d4-1eba-48a4-b90d-be28d5e1b03d","Type":"ContainerStarted","Data":"d03c82903090855fe4d4ea79242082d0b6370721a9cc79bf9dba428338ba329d"} Jan 27 16:03:33 crc kubenswrapper[4697]: I0127 16:03:33.945922 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hct" 
event={"ID":"2f7942d4-1eba-48a4-b90d-be28d5e1b03d","Type":"ContainerStarted","Data":"3d066112d60ea2bd6bcbd30940d6b36e59635246272c39bbaeac1e90773fa9a8"} Jan 27 16:03:36 crc kubenswrapper[4697]: I0127 16:03:36.971506 4697 generic.go:334] "Generic (PLEG): container finished" podID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerID="3d066112d60ea2bd6bcbd30940d6b36e59635246272c39bbaeac1e90773fa9a8" exitCode=0 Jan 27 16:03:36 crc kubenswrapper[4697]: I0127 16:03:36.971668 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hct" event={"ID":"2f7942d4-1eba-48a4-b90d-be28d5e1b03d","Type":"ContainerDied","Data":"3d066112d60ea2bd6bcbd30940d6b36e59635246272c39bbaeac1e90773fa9a8"} Jan 27 16:03:37 crc kubenswrapper[4697]: I0127 16:03:37.983149 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hct" event={"ID":"2f7942d4-1eba-48a4-b90d-be28d5e1b03d","Type":"ContainerStarted","Data":"192da85bd0c109499ffb344e6e4e3691079eb1ee3bfc9ca79abb99ae9e70f7ec"} Jan 27 16:03:38 crc kubenswrapper[4697]: I0127 16:03:38.007080 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t7hct" podStartSLOduration=3.267304323 podStartE2EDuration="8.007058973s" podCreationTimestamp="2026-01-27 16:03:30 +0000 UTC" firstStartedPulling="2026-01-27 16:03:32.93549667 +0000 UTC m=+3309.107896461" lastFinishedPulling="2026-01-27 16:03:37.67525133 +0000 UTC m=+3313.847651111" observedRunningTime="2026-01-27 16:03:37.999244858 +0000 UTC m=+3314.171644649" watchObservedRunningTime="2026-01-27 16:03:38.007058973 +0000 UTC m=+3314.179458754" Jan 27 16:03:41 crc kubenswrapper[4697]: I0127 16:03:41.088750 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:41 crc kubenswrapper[4697]: I0127 16:03:41.089310 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:42 crc kubenswrapper[4697]: I0127 16:03:42.146264 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-t7hct" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerName="registry-server" probeResult="failure" output=< Jan 27 16:03:42 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 16:03:42 crc kubenswrapper[4697]: > Jan 27 16:03:51 crc kubenswrapper[4697]: I0127 16:03:51.148980 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:51 crc kubenswrapper[4697]: I0127 16:03:51.203913 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:53 crc kubenswrapper[4697]: I0127 16:03:53.936473 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7hct"] Jan 27 16:03:53 crc kubenswrapper[4697]: I0127 16:03:53.938681 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t7hct" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerName="registry-server" containerID="cri-o://192da85bd0c109499ffb344e6e4e3691079eb1ee3bfc9ca79abb99ae9e70f7ec" gracePeriod=2 Jan 27 16:03:54 crc kubenswrapper[4697]: I0127 16:03:54.176722 4697 generic.go:334] "Generic (PLEG): container finished" podID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerID="192da85bd0c109499ffb344e6e4e3691079eb1ee3bfc9ca79abb99ae9e70f7ec" exitCode=0 Jan 27 16:03:54 crc kubenswrapper[4697]: I0127 16:03:54.176937 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hct" event={"ID":"2f7942d4-1eba-48a4-b90d-be28d5e1b03d","Type":"ContainerDied","Data":"192da85bd0c109499ffb344e6e4e3691079eb1ee3bfc9ca79abb99ae9e70f7ec"} Jan 
27 16:03:54 crc kubenswrapper[4697]: I0127 16:03:54.844992 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.047108 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-utilities\") pod \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.047206 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-catalog-content\") pod \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.047273 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f6lg\" (UniqueName: \"kubernetes.io/projected/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-kube-api-access-5f6lg\") pod \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\" (UID: \"2f7942d4-1eba-48a4-b90d-be28d5e1b03d\") " Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.049934 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-utilities" (OuterVolumeSpecName: "utilities") pod "2f7942d4-1eba-48a4-b90d-be28d5e1b03d" (UID: "2f7942d4-1eba-48a4-b90d-be28d5e1b03d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.065668 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-kube-api-access-5f6lg" (OuterVolumeSpecName: "kube-api-access-5f6lg") pod "2f7942d4-1eba-48a4-b90d-be28d5e1b03d" (UID: "2f7942d4-1eba-48a4-b90d-be28d5e1b03d"). InnerVolumeSpecName "kube-api-access-5f6lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.136435 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f7942d4-1eba-48a4-b90d-be28d5e1b03d" (UID: "2f7942d4-1eba-48a4-b90d-be28d5e1b03d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.150137 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.150178 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.150189 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f6lg\" (UniqueName: \"kubernetes.io/projected/2f7942d4-1eba-48a4-b90d-be28d5e1b03d-kube-api-access-5f6lg\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.186772 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7hct" 
event={"ID":"2f7942d4-1eba-48a4-b90d-be28d5e1b03d","Type":"ContainerDied","Data":"d03c82903090855fe4d4ea79242082d0b6370721a9cc79bf9dba428338ba329d"} Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.186836 4697 scope.go:117] "RemoveContainer" containerID="192da85bd0c109499ffb344e6e4e3691079eb1ee3bfc9ca79abb99ae9e70f7ec" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.186952 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7hct" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.238155 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7hct"] Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.254889 4697 scope.go:117] "RemoveContainer" containerID="3d066112d60ea2bd6bcbd30940d6b36e59635246272c39bbaeac1e90773fa9a8" Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.256150 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t7hct"] Jan 27 16:03:55 crc kubenswrapper[4697]: I0127 16:03:55.278978 4697 scope.go:117] "RemoveContainer" containerID="31ed8140a90e0b6156467d22f1e3a66075dd67bb86447e338deee5f8f58859a7" Jan 27 16:03:56 crc kubenswrapper[4697]: I0127 16:03:56.582416 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" path="/var/lib/kubelet/pods/2f7942d4-1eba-48a4-b90d-be28d5e1b03d/volumes" Jan 27 16:04:25 crc kubenswrapper[4697]: I0127 16:04:25.109070 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:04:25 crc kubenswrapper[4697]: I0127 16:04:25.110471 4697 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:04:55 crc kubenswrapper[4697]: I0127 16:04:55.108634 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:04:55 crc kubenswrapper[4697]: I0127 16:04:55.109132 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.803477 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wcxbt"] Jan 27 16:05:09 crc kubenswrapper[4697]: E0127 16:05:09.804519 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerName="extract-utilities" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.804536 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerName="extract-utilities" Jan 27 16:05:09 crc kubenswrapper[4697]: E0127 16:05:09.804562 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerName="extract-content" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.804569 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerName="extract-content" Jan 27 
16:05:09 crc kubenswrapper[4697]: E0127 16:05:09.804579 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerName="registry-server" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.804586 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerName="registry-server" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.804768 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7942d4-1eba-48a4-b90d-be28d5e1b03d" containerName="registry-server" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.821918 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcxbt"] Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.822025 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.838108 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-utilities\") pod \"redhat-marketplace-wcxbt\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.838372 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-catalog-content\") pod \"redhat-marketplace-wcxbt\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.838487 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg6mn\" (UniqueName: 
\"kubernetes.io/projected/ede9f843-6951-4d2b-8c07-a664f189a015-kube-api-access-tg6mn\") pod \"redhat-marketplace-wcxbt\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.940426 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-utilities\") pod \"redhat-marketplace-wcxbt\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.940477 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-catalog-content\") pod \"redhat-marketplace-wcxbt\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.940526 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg6mn\" (UniqueName: \"kubernetes.io/projected/ede9f843-6951-4d2b-8c07-a664f189a015-kube-api-access-tg6mn\") pod \"redhat-marketplace-wcxbt\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.941271 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-catalog-content\") pod \"redhat-marketplace-wcxbt\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.942573 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-utilities\") pod \"redhat-marketplace-wcxbt\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:09 crc kubenswrapper[4697]: I0127 16:05:09.961604 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg6mn\" (UniqueName: \"kubernetes.io/projected/ede9f843-6951-4d2b-8c07-a664f189a015-kube-api-access-tg6mn\") pod \"redhat-marketplace-wcxbt\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:10 crc kubenswrapper[4697]: I0127 16:05:10.143225 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:10 crc kubenswrapper[4697]: I0127 16:05:10.844229 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcxbt"] Jan 27 16:05:10 crc kubenswrapper[4697]: I0127 16:05:10.879313 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcxbt" event={"ID":"ede9f843-6951-4d2b-8c07-a664f189a015","Type":"ContainerStarted","Data":"e5bf884fdfdd0f73f5711620be9cfd29d7f839931dc00068b4844b9bbeb8eea2"} Jan 27 16:05:11 crc kubenswrapper[4697]: I0127 16:05:11.889035 4697 generic.go:334] "Generic (PLEG): container finished" podID="ede9f843-6951-4d2b-8c07-a664f189a015" containerID="0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8" exitCode=0 Jan 27 16:05:11 crc kubenswrapper[4697]: I0127 16:05:11.889091 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcxbt" event={"ID":"ede9f843-6951-4d2b-8c07-a664f189a015","Type":"ContainerDied","Data":"0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8"} Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.399111 4697 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-gphv2"] Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.400899 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.407453 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gphv2"] Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.596201 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5qw\" (UniqueName: \"kubernetes.io/projected/eee2b617-9ed0-4676-9a5b-463845e161f5-kube-api-access-qm5qw\") pod \"community-operators-gphv2\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.596549 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-utilities\") pod \"community-operators-gphv2\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.596841 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-catalog-content\") pod \"community-operators-gphv2\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.699690 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-utilities\") pod \"community-operators-gphv2\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") 
" pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.700151 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-catalog-content\") pod \"community-operators-gphv2\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.700521 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5qw\" (UniqueName: \"kubernetes.io/projected/eee2b617-9ed0-4676-9a5b-463845e161f5-kube-api-access-qm5qw\") pod \"community-operators-gphv2\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.702053 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-catalog-content\") pod \"community-operators-gphv2\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.703272 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-utilities\") pod \"community-operators-gphv2\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:12 crc kubenswrapper[4697]: I0127 16:05:12.755299 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5qw\" (UniqueName: \"kubernetes.io/projected/eee2b617-9ed0-4676-9a5b-463845e161f5-kube-api-access-qm5qw\") pod \"community-operators-gphv2\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " 
pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:13 crc kubenswrapper[4697]: I0127 16:05:13.019040 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:13 crc kubenswrapper[4697]: I0127 16:05:13.664991 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gphv2"] Jan 27 16:05:13 crc kubenswrapper[4697]: W0127 16:05:13.665995 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeee2b617_9ed0_4676_9a5b_463845e161f5.slice/crio-a9fb78b56ead58e5d5f7982ef0f8fff4e23b2f8047cd3e6ea655e82cafb84b21 WatchSource:0}: Error finding container a9fb78b56ead58e5d5f7982ef0f8fff4e23b2f8047cd3e6ea655e82cafb84b21: Status 404 returned error can't find the container with id a9fb78b56ead58e5d5f7982ef0f8fff4e23b2f8047cd3e6ea655e82cafb84b21 Jan 27 16:05:13 crc kubenswrapper[4697]: I0127 16:05:13.913697 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcxbt" event={"ID":"ede9f843-6951-4d2b-8c07-a664f189a015","Type":"ContainerStarted","Data":"2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d"} Jan 27 16:05:13 crc kubenswrapper[4697]: I0127 16:05:13.914921 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gphv2" event={"ID":"eee2b617-9ed0-4676-9a5b-463845e161f5","Type":"ContainerStarted","Data":"a9fb78b56ead58e5d5f7982ef0f8fff4e23b2f8047cd3e6ea655e82cafb84b21"} Jan 27 16:05:14 crc kubenswrapper[4697]: I0127 16:05:14.922584 4697 generic.go:334] "Generic (PLEG): container finished" podID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerID="230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776" exitCode=0 Jan 27 16:05:14 crc kubenswrapper[4697]: I0127 16:05:14.922761 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gphv2" event={"ID":"eee2b617-9ed0-4676-9a5b-463845e161f5","Type":"ContainerDied","Data":"230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776"} Jan 27 16:05:17 crc kubenswrapper[4697]: I0127 16:05:17.948915 4697 generic.go:334] "Generic (PLEG): container finished" podID="ede9f843-6951-4d2b-8c07-a664f189a015" containerID="2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d" exitCode=0 Jan 27 16:05:17 crc kubenswrapper[4697]: I0127 16:05:17.948980 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcxbt" event={"ID":"ede9f843-6951-4d2b-8c07-a664f189a015","Type":"ContainerDied","Data":"2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d"} Jan 27 16:05:17 crc kubenswrapper[4697]: I0127 16:05:17.952920 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gphv2" event={"ID":"eee2b617-9ed0-4676-9a5b-463845e161f5","Type":"ContainerStarted","Data":"8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c"} Jan 27 16:05:18 crc kubenswrapper[4697]: I0127 16:05:18.963910 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcxbt" event={"ID":"ede9f843-6951-4d2b-8c07-a664f189a015","Type":"ContainerStarted","Data":"59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b"} Jan 27 16:05:18 crc kubenswrapper[4697]: I0127 16:05:18.991411 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wcxbt" podStartSLOduration=3.474949853 podStartE2EDuration="9.991385263s" podCreationTimestamp="2026-01-27 16:05:09 +0000 UTC" firstStartedPulling="2026-01-27 16:05:11.890722816 +0000 UTC m=+3408.063122597" lastFinishedPulling="2026-01-27 16:05:18.407158226 +0000 UTC m=+3414.579558007" observedRunningTime="2026-01-27 16:05:18.980997442 +0000 UTC m=+3415.153397233" 
watchObservedRunningTime="2026-01-27 16:05:18.991385263 +0000 UTC m=+3415.163785054" Jan 27 16:05:20 crc kubenswrapper[4697]: I0127 16:05:20.144728 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:20 crc kubenswrapper[4697]: I0127 16:05:20.144774 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:22 crc kubenswrapper[4697]: I0127 16:05:22.045716 4697 generic.go:334] "Generic (PLEG): container finished" podID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerID="8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c" exitCode=0 Jan 27 16:05:22 crc kubenswrapper[4697]: I0127 16:05:22.046500 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gphv2" event={"ID":"eee2b617-9ed0-4676-9a5b-463845e161f5","Type":"ContainerDied","Data":"8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c"} Jan 27 16:05:22 crc kubenswrapper[4697]: I0127 16:05:22.140772 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wcxbt" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" containerName="registry-server" probeResult="failure" output=< Jan 27 16:05:22 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 16:05:22 crc kubenswrapper[4697]: > Jan 27 16:05:23 crc kubenswrapper[4697]: I0127 16:05:23.054876 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gphv2" event={"ID":"eee2b617-9ed0-4676-9a5b-463845e161f5","Type":"ContainerStarted","Data":"b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841"} Jan 27 16:05:23 crc kubenswrapper[4697]: I0127 16:05:23.079824 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gphv2" 
podStartSLOduration=3.371781745 podStartE2EDuration="11.079807187s" podCreationTimestamp="2026-01-27 16:05:12 +0000 UTC" firstStartedPulling="2026-01-27 16:05:14.924364885 +0000 UTC m=+3411.096764666" lastFinishedPulling="2026-01-27 16:05:22.632390337 +0000 UTC m=+3418.804790108" observedRunningTime="2026-01-27 16:05:23.073865877 +0000 UTC m=+3419.246265658" watchObservedRunningTime="2026-01-27 16:05:23.079807187 +0000 UTC m=+3419.252206968" Jan 27 16:05:25 crc kubenswrapper[4697]: I0127 16:05:25.109421 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:05:25 crc kubenswrapper[4697]: I0127 16:05:25.110732 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:05:25 crc kubenswrapper[4697]: I0127 16:05:25.110887 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 16:05:25 crc kubenswrapper[4697]: I0127 16:05:25.111688 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3eb09feb9e26fa4ecab5032165e1fa5342a47b0a789b98e14109ed9027c3b15d"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:05:25 crc kubenswrapper[4697]: I0127 16:05:25.111859 4697 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://3eb09feb9e26fa4ecab5032165e1fa5342a47b0a789b98e14109ed9027c3b15d" gracePeriod=600 Jan 27 16:05:26 crc kubenswrapper[4697]: I0127 16:05:26.099167 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="3eb09feb9e26fa4ecab5032165e1fa5342a47b0a789b98e14109ed9027c3b15d" exitCode=0 Jan 27 16:05:26 crc kubenswrapper[4697]: I0127 16:05:26.099234 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"3eb09feb9e26fa4ecab5032165e1fa5342a47b0a789b98e14109ed9027c3b15d"} Jan 27 16:05:26 crc kubenswrapper[4697]: I0127 16:05:26.099449 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf"} Jan 27 16:05:26 crc kubenswrapper[4697]: I0127 16:05:26.099468 4697 scope.go:117] "RemoveContainer" containerID="070df30899c85498f8cec50dcfed85b20e4ba889e263f6d29311290475d4f7df" Jan 27 16:05:30 crc kubenswrapper[4697]: I0127 16:05:30.200486 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:30 crc kubenswrapper[4697]: I0127 16:05:30.254442 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:30 crc kubenswrapper[4697]: I0127 16:05:30.433926 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcxbt"] Jan 27 16:05:32 crc kubenswrapper[4697]: I0127 16:05:32.157227 4697 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wcxbt" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" containerName="registry-server" containerID="cri-o://59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b" gracePeriod=2 Jan 27 16:05:32 crc kubenswrapper[4697]: I0127 16:05:32.839520 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:32 crc kubenswrapper[4697]: I0127 16:05:32.927922 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg6mn\" (UniqueName: \"kubernetes.io/projected/ede9f843-6951-4d2b-8c07-a664f189a015-kube-api-access-tg6mn\") pod \"ede9f843-6951-4d2b-8c07-a664f189a015\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " Jan 27 16:05:32 crc kubenswrapper[4697]: I0127 16:05:32.928137 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-utilities\") pod \"ede9f843-6951-4d2b-8c07-a664f189a015\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " Jan 27 16:05:32 crc kubenswrapper[4697]: I0127 16:05:32.928232 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-catalog-content\") pod \"ede9f843-6951-4d2b-8c07-a664f189a015\" (UID: \"ede9f843-6951-4d2b-8c07-a664f189a015\") " Jan 27 16:05:32 crc kubenswrapper[4697]: I0127 16:05:32.928649 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-utilities" (OuterVolumeSpecName: "utilities") pod "ede9f843-6951-4d2b-8c07-a664f189a015" (UID: "ede9f843-6951-4d2b-8c07-a664f189a015"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:32 crc kubenswrapper[4697]: I0127 16:05:32.929237 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:32 crc kubenswrapper[4697]: I0127 16:05:32.938222 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede9f843-6951-4d2b-8c07-a664f189a015-kube-api-access-tg6mn" (OuterVolumeSpecName: "kube-api-access-tg6mn") pod "ede9f843-6951-4d2b-8c07-a664f189a015" (UID: "ede9f843-6951-4d2b-8c07-a664f189a015"). InnerVolumeSpecName "kube-api-access-tg6mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:32 crc kubenswrapper[4697]: I0127 16:05:32.959945 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ede9f843-6951-4d2b-8c07-a664f189a015" (UID: "ede9f843-6951-4d2b-8c07-a664f189a015"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.019509 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.019590 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.032241 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg6mn\" (UniqueName: \"kubernetes.io/projected/ede9f843-6951-4d2b-8c07-a664f189a015-kube-api-access-tg6mn\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.032286 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede9f843-6951-4d2b-8c07-a664f189a015-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.072363 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.167190 4697 generic.go:334] "Generic (PLEG): container finished" podID="ede9f843-6951-4d2b-8c07-a664f189a015" containerID="59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b" exitCode=0 Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.167284 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcxbt" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.167299 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcxbt" event={"ID":"ede9f843-6951-4d2b-8c07-a664f189a015","Type":"ContainerDied","Data":"59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b"} Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.167359 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcxbt" event={"ID":"ede9f843-6951-4d2b-8c07-a664f189a015","Type":"ContainerDied","Data":"e5bf884fdfdd0f73f5711620be9cfd29d7f839931dc00068b4844b9bbeb8eea2"} Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.167409 4697 scope.go:117] "RemoveContainer" containerID="59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.207578 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcxbt"] Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.208679 4697 scope.go:117] "RemoveContainer" containerID="2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.215460 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.215560 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcxbt"] Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.227663 4697 scope.go:117] "RemoveContainer" containerID="0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.275625 4697 scope.go:117] "RemoveContainer" containerID="59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b" Jan 27 16:05:33 crc 
kubenswrapper[4697]: E0127 16:05:33.276074 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b\": container with ID starting with 59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b not found: ID does not exist" containerID="59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.276133 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b"} err="failed to get container status \"59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b\": rpc error: code = NotFound desc = could not find container \"59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b\": container with ID starting with 59db1989399cb2cb6cfb68ba8fa96311d68f4fcb58e0ce3c8d82937f2a5f9d8b not found: ID does not exist" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.276169 4697 scope.go:117] "RemoveContainer" containerID="2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d" Jan 27 16:05:33 crc kubenswrapper[4697]: E0127 16:05:33.276653 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d\": container with ID starting with 2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d not found: ID does not exist" containerID="2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.276678 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d"} err="failed to get container status 
\"2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d\": rpc error: code = NotFound desc = could not find container \"2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d\": container with ID starting with 2ec8ec233a08d9292673e40551101af2d7553acef2cb5df54b617ae7fdcb619d not found: ID does not exist" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.276695 4697 scope.go:117] "RemoveContainer" containerID="0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8" Jan 27 16:05:33 crc kubenswrapper[4697]: E0127 16:05:33.277012 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8\": container with ID starting with 0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8 not found: ID does not exist" containerID="0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8" Jan 27 16:05:33 crc kubenswrapper[4697]: I0127 16:05:33.277062 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8"} err="failed to get container status \"0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8\": rpc error: code = NotFound desc = could not find container \"0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8\": container with ID starting with 0126a40404c66c75ed479a649e9caf1cf7224a7e63fb58ae152faf6a344fcef8 not found: ID does not exist" Jan 27 16:05:34 crc kubenswrapper[4697]: I0127 16:05:34.438730 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gphv2"] Jan 27 16:05:34 crc kubenswrapper[4697]: I0127 16:05:34.608594 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" path="/var/lib/kubelet/pods/ede9f843-6951-4d2b-8c07-a664f189a015/volumes" Jan 27 
16:05:35 crc kubenswrapper[4697]: I0127 16:05:35.197695 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gphv2" podUID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerName="registry-server" containerID="cri-o://b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841" gracePeriod=2 Jan 27 16:05:35 crc kubenswrapper[4697]: I0127 16:05:35.866768 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:35 crc kubenswrapper[4697]: I0127 16:05:35.983821 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5qw\" (UniqueName: \"kubernetes.io/projected/eee2b617-9ed0-4676-9a5b-463845e161f5-kube-api-access-qm5qw\") pod \"eee2b617-9ed0-4676-9a5b-463845e161f5\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " Jan 27 16:05:35 crc kubenswrapper[4697]: I0127 16:05:35.984115 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-catalog-content\") pod \"eee2b617-9ed0-4676-9a5b-463845e161f5\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " Jan 27 16:05:35 crc kubenswrapper[4697]: I0127 16:05:35.984208 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-utilities\") pod \"eee2b617-9ed0-4676-9a5b-463845e161f5\" (UID: \"eee2b617-9ed0-4676-9a5b-463845e161f5\") " Jan 27 16:05:35 crc kubenswrapper[4697]: I0127 16:05:35.985074 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-utilities" (OuterVolumeSpecName: "utilities") pod "eee2b617-9ed0-4676-9a5b-463845e161f5" (UID: "eee2b617-9ed0-4676-9a5b-463845e161f5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:35 crc kubenswrapper[4697]: I0127 16:05:35.995316 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee2b617-9ed0-4676-9a5b-463845e161f5-kube-api-access-qm5qw" (OuterVolumeSpecName: "kube-api-access-qm5qw") pod "eee2b617-9ed0-4676-9a5b-463845e161f5" (UID: "eee2b617-9ed0-4676-9a5b-463845e161f5"). InnerVolumeSpecName "kube-api-access-qm5qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.051941 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eee2b617-9ed0-4676-9a5b-463845e161f5" (UID: "eee2b617-9ed0-4676-9a5b-463845e161f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.086366 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm5qw\" (UniqueName: \"kubernetes.io/projected/eee2b617-9ed0-4676-9a5b-463845e161f5-kube-api-access-qm5qw\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.086398 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.086408 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eee2b617-9ed0-4676-9a5b-463845e161f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.205195 4697 generic.go:334] "Generic (PLEG): container finished" podID="eee2b617-9ed0-4676-9a5b-463845e161f5" 
containerID="b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841" exitCode=0 Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.205232 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gphv2" event={"ID":"eee2b617-9ed0-4676-9a5b-463845e161f5","Type":"ContainerDied","Data":"b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841"} Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.205256 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gphv2" event={"ID":"eee2b617-9ed0-4676-9a5b-463845e161f5","Type":"ContainerDied","Data":"a9fb78b56ead58e5d5f7982ef0f8fff4e23b2f8047cd3e6ea655e82cafb84b21"} Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.205272 4697 scope.go:117] "RemoveContainer" containerID="b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.205398 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gphv2" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.246638 4697 scope.go:117] "RemoveContainer" containerID="8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.247022 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gphv2"] Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.254374 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gphv2"] Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.299825 4697 scope.go:117] "RemoveContainer" containerID="230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.332813 4697 scope.go:117] "RemoveContainer" containerID="b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841" Jan 27 16:05:36 crc kubenswrapper[4697]: E0127 16:05:36.333290 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841\": container with ID starting with b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841 not found: ID does not exist" containerID="b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.333325 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841"} err="failed to get container status \"b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841\": rpc error: code = NotFound desc = could not find container \"b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841\": container with ID starting with b422c17c46a17010a5010e079011bb4b224de32b1aa6c501e54a0d193057a841 not 
found: ID does not exist" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.333358 4697 scope.go:117] "RemoveContainer" containerID="8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c" Jan 27 16:05:36 crc kubenswrapper[4697]: E0127 16:05:36.333598 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c\": container with ID starting with 8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c not found: ID does not exist" containerID="8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.333628 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c"} err="failed to get container status \"8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c\": rpc error: code = NotFound desc = could not find container \"8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c\": container with ID starting with 8781268f86dd19d0a4ee582ac34aa737f5f13fee45b190a2304c98946cc9e18c not found: ID does not exist" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.333654 4697 scope.go:117] "RemoveContainer" containerID="230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776" Jan 27 16:05:36 crc kubenswrapper[4697]: E0127 16:05:36.334554 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776\": container with ID starting with 230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776 not found: ID does not exist" containerID="230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.334591 4697 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776"} err="failed to get container status \"230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776\": rpc error: code = NotFound desc = could not find container \"230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776\": container with ID starting with 230887b8a50007eae4843823d44ef05ae9897ed6d2d3ca4b4818aec10fc7b776 not found: ID does not exist" Jan 27 16:05:36 crc kubenswrapper[4697]: I0127 16:05:36.578677 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee2b617-9ed0-4676-9a5b-463845e161f5" path="/var/lib/kubelet/pods/eee2b617-9ed0-4676-9a5b-463845e161f5/volumes" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.441713 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-66gpn"] Jan 27 16:05:46 crc kubenswrapper[4697]: E0127 16:05:46.442532 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerName="extract-utilities" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.442544 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerName="extract-utilities" Jan 27 16:05:46 crc kubenswrapper[4697]: E0127 16:05:46.442557 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" containerName="extract-content" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.442563 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" containerName="extract-content" Jan 27 16:05:46 crc kubenswrapper[4697]: E0127 16:05:46.442581 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" containerName="extract-utilities" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 
16:05:46.442587 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" containerName="extract-utilities" Jan 27 16:05:46 crc kubenswrapper[4697]: E0127 16:05:46.442611 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerName="registry-server" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.442617 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerName="registry-server" Jan 27 16:05:46 crc kubenswrapper[4697]: E0127 16:05:46.442627 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerName="extract-content" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.442633 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerName="extract-content" Jan 27 16:05:46 crc kubenswrapper[4697]: E0127 16:05:46.442644 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" containerName="registry-server" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.442650 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" containerName="registry-server" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.450467 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee2b617-9ed0-4676-9a5b-463845e161f5" containerName="registry-server" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.450510 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede9f843-6951-4d2b-8c07-a664f189a015" containerName="registry-server" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.451821 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66gpn"] Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.451901 4697 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.596649 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-utilities\") pod \"redhat-operators-66gpn\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.596703 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-catalog-content\") pod \"redhat-operators-66gpn\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.596806 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfbnf\" (UniqueName: \"kubernetes.io/projected/760e5be0-0b0c-40b2-90c4-b30b1144793c-kube-api-access-dfbnf\") pod \"redhat-operators-66gpn\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.698425 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-utilities\") pod \"redhat-operators-66gpn\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.698770 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-catalog-content\") pod 
\"redhat-operators-66gpn\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.698874 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfbnf\" (UniqueName: \"kubernetes.io/projected/760e5be0-0b0c-40b2-90c4-b30b1144793c-kube-api-access-dfbnf\") pod \"redhat-operators-66gpn\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.698898 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-utilities\") pod \"redhat-operators-66gpn\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.699173 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-catalog-content\") pod \"redhat-operators-66gpn\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.723705 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfbnf\" (UniqueName: \"kubernetes.io/projected/760e5be0-0b0c-40b2-90c4-b30b1144793c-kube-api-access-dfbnf\") pod \"redhat-operators-66gpn\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:46 crc kubenswrapper[4697]: I0127 16:05:46.791172 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:05:47 crc kubenswrapper[4697]: I0127 16:05:47.320480 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66gpn"] Jan 27 16:05:48 crc kubenswrapper[4697]: I0127 16:05:48.325143 4697 generic.go:334] "Generic (PLEG): container finished" podID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerID="0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3" exitCode=0 Jan 27 16:05:48 crc kubenswrapper[4697]: I0127 16:05:48.325553 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66gpn" event={"ID":"760e5be0-0b0c-40b2-90c4-b30b1144793c","Type":"ContainerDied","Data":"0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3"} Jan 27 16:05:48 crc kubenswrapper[4697]: I0127 16:05:48.325584 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66gpn" event={"ID":"760e5be0-0b0c-40b2-90c4-b30b1144793c","Type":"ContainerStarted","Data":"9d51c275e0beb4ff28b5edc8417147f2986e1f2aeb7749bcd4985c81ce3b71d2"} Jan 27 16:05:49 crc kubenswrapper[4697]: I0127 16:05:49.333599 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66gpn" event={"ID":"760e5be0-0b0c-40b2-90c4-b30b1144793c","Type":"ContainerStarted","Data":"09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b"} Jan 27 16:05:57 crc kubenswrapper[4697]: I0127 16:05:57.414776 4697 generic.go:334] "Generic (PLEG): container finished" podID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerID="09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b" exitCode=0 Jan 27 16:05:57 crc kubenswrapper[4697]: I0127 16:05:57.415478 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66gpn" 
event={"ID":"760e5be0-0b0c-40b2-90c4-b30b1144793c","Type":"ContainerDied","Data":"09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b"} Jan 27 16:05:58 crc kubenswrapper[4697]: I0127 16:05:58.425511 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66gpn" event={"ID":"760e5be0-0b0c-40b2-90c4-b30b1144793c","Type":"ContainerStarted","Data":"a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7"} Jan 27 16:05:58 crc kubenswrapper[4697]: I0127 16:05:58.455921 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-66gpn" podStartSLOduration=2.979492929 podStartE2EDuration="12.455902343s" podCreationTimestamp="2026-01-27 16:05:46 +0000 UTC" firstStartedPulling="2026-01-27 16:05:48.327264479 +0000 UTC m=+3444.499664260" lastFinishedPulling="2026-01-27 16:05:57.803673893 +0000 UTC m=+3453.976073674" observedRunningTime="2026-01-27 16:05:58.445160663 +0000 UTC m=+3454.617560454" watchObservedRunningTime="2026-01-27 16:05:58.455902343 +0000 UTC m=+3454.628302114" Jan 27 16:06:06 crc kubenswrapper[4697]: I0127 16:06:06.791560 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:06:06 crc kubenswrapper[4697]: I0127 16:06:06.792187 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:06:07 crc kubenswrapper[4697]: I0127 16:06:07.833706 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66gpn" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="registry-server" probeResult="failure" output=< Jan 27 16:06:07 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 16:06:07 crc kubenswrapper[4697]: > Jan 27 16:06:17 crc kubenswrapper[4697]: I0127 16:06:17.836976 4697 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-66gpn" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="registry-server" probeResult="failure" output=< Jan 27 16:06:17 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 16:06:17 crc kubenswrapper[4697]: > Jan 27 16:06:27 crc kubenswrapper[4697]: I0127 16:06:27.845446 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66gpn" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="registry-server" probeResult="failure" output=< Jan 27 16:06:27 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 16:06:27 crc kubenswrapper[4697]: > Jan 27 16:06:36 crc kubenswrapper[4697]: I0127 16:06:36.844670 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:06:36 crc kubenswrapper[4697]: I0127 16:06:36.905274 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:06:37 crc kubenswrapper[4697]: I0127 16:06:37.082404 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66gpn"] Jan 27 16:06:38 crc kubenswrapper[4697]: I0127 16:06:38.362619 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-66gpn" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="registry-server" containerID="cri-o://a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7" gracePeriod=2 Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.024753 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.139955 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-catalog-content\") pod \"760e5be0-0b0c-40b2-90c4-b30b1144793c\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.140054 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfbnf\" (UniqueName: \"kubernetes.io/projected/760e5be0-0b0c-40b2-90c4-b30b1144793c-kube-api-access-dfbnf\") pod \"760e5be0-0b0c-40b2-90c4-b30b1144793c\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.140293 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-utilities\") pod \"760e5be0-0b0c-40b2-90c4-b30b1144793c\" (UID: \"760e5be0-0b0c-40b2-90c4-b30b1144793c\") " Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.141040 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-utilities" (OuterVolumeSpecName: "utilities") pod "760e5be0-0b0c-40b2-90c4-b30b1144793c" (UID: "760e5be0-0b0c-40b2-90c4-b30b1144793c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.146537 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760e5be0-0b0c-40b2-90c4-b30b1144793c-kube-api-access-dfbnf" (OuterVolumeSpecName: "kube-api-access-dfbnf") pod "760e5be0-0b0c-40b2-90c4-b30b1144793c" (UID: "760e5be0-0b0c-40b2-90c4-b30b1144793c"). InnerVolumeSpecName "kube-api-access-dfbnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.242915 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfbnf\" (UniqueName: \"kubernetes.io/projected/760e5be0-0b0c-40b2-90c4-b30b1144793c-kube-api-access-dfbnf\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.242960 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.263012 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "760e5be0-0b0c-40b2-90c4-b30b1144793c" (UID: "760e5be0-0b0c-40b2-90c4-b30b1144793c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.345501 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760e5be0-0b0c-40b2-90c4-b30b1144793c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.374453 4697 generic.go:334] "Generic (PLEG): container finished" podID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerID="a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7" exitCode=0 Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.374502 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66gpn" event={"ID":"760e5be0-0b0c-40b2-90c4-b30b1144793c","Type":"ContainerDied","Data":"a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7"} Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.374534 4697 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-66gpn" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.374553 4697 scope.go:117] "RemoveContainer" containerID="a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.374537 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66gpn" event={"ID":"760e5be0-0b0c-40b2-90c4-b30b1144793c","Type":"ContainerDied","Data":"9d51c275e0beb4ff28b5edc8417147f2986e1f2aeb7749bcd4985c81ce3b71d2"} Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.414497 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66gpn"] Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.419273 4697 scope.go:117] "RemoveContainer" containerID="09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.442765 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-66gpn"] Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.465557 4697 scope.go:117] "RemoveContainer" containerID="0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.507828 4697 scope.go:117] "RemoveContainer" containerID="a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7" Jan 27 16:06:39 crc kubenswrapper[4697]: E0127 16:06:39.509139 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7\": container with ID starting with a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7 not found: ID does not exist" containerID="a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.509181 4697 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7"} err="failed to get container status \"a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7\": rpc error: code = NotFound desc = could not find container \"a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7\": container with ID starting with a11eb4c1087f303c20faef010ab458e3703de2a22926c276cfdb7636f82843a7 not found: ID does not exist" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.509207 4697 scope.go:117] "RemoveContainer" containerID="09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b" Jan 27 16:06:39 crc kubenswrapper[4697]: E0127 16:06:39.509588 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b\": container with ID starting with 09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b not found: ID does not exist" containerID="09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.509624 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b"} err="failed to get container status \"09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b\": rpc error: code = NotFound desc = could not find container \"09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b\": container with ID starting with 09507af484061c36606bace41e5ff89532429338548979db91abfc3e2e68df0b not found: ID does not exist" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.509665 4697 scope.go:117] "RemoveContainer" containerID="0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3" Jan 27 16:06:39 crc kubenswrapper[4697]: E0127 
16:06:39.511924 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3\": container with ID starting with 0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3 not found: ID does not exist" containerID="0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3" Jan 27 16:06:39 crc kubenswrapper[4697]: I0127 16:06:39.511961 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3"} err="failed to get container status \"0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3\": rpc error: code = NotFound desc = could not find container \"0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3\": container with ID starting with 0db63ce3cd91c89b86eabf8d7011ef8260d6ff72b790dbc4cc9ef050c48fb3e3 not found: ID does not exist" Jan 27 16:06:40 crc kubenswrapper[4697]: I0127 16:06:40.579053 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" path="/var/lib/kubelet/pods/760e5be0-0b0c-40b2-90c4-b30b1144793c/volumes" Jan 27 16:07:25 crc kubenswrapper[4697]: I0127 16:07:25.108407 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:07:25 crc kubenswrapper[4697]: I0127 16:07:25.109040 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 16:07:55 crc kubenswrapper[4697]: I0127 16:07:55.108985 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:07:55 crc kubenswrapper[4697]: I0127 16:07:55.109807 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:08:25 crc kubenswrapper[4697]: I0127 16:08:25.108385 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:08:25 crc kubenswrapper[4697]: I0127 16:08:25.108917 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:08:25 crc kubenswrapper[4697]: I0127 16:08:25.108971 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 16:08:25 crc kubenswrapper[4697]: I0127 16:08:25.109758 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf"} 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:08:25 crc kubenswrapper[4697]: I0127 16:08:25.109820 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" gracePeriod=600 Jan 27 16:08:25 crc kubenswrapper[4697]: E0127 16:08:25.233107 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:08:25 crc kubenswrapper[4697]: I0127 16:08:25.325504 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" exitCode=0 Jan 27 16:08:25 crc kubenswrapper[4697]: I0127 16:08:25.325556 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf"} Jan 27 16:08:25 crc kubenswrapper[4697]: I0127 16:08:25.325593 4697 scope.go:117] "RemoveContainer" containerID="3eb09feb9e26fa4ecab5032165e1fa5342a47b0a789b98e14109ed9027c3b15d" Jan 27 16:08:25 crc kubenswrapper[4697]: I0127 16:08:25.326330 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 
27 16:08:25 crc kubenswrapper[4697]: E0127 16:08:25.326730 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:08:40 crc kubenswrapper[4697]: I0127 16:08:40.569159 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:08:40 crc kubenswrapper[4697]: E0127 16:08:40.570151 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:08:53 crc kubenswrapper[4697]: I0127 16:08:53.568826 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:08:53 crc kubenswrapper[4697]: E0127 16:08:53.569773 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:09:04 crc kubenswrapper[4697]: I0127 16:09:04.575335 4697 scope.go:117] "RemoveContainer" 
containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:09:04 crc kubenswrapper[4697]: E0127 16:09:04.576141 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:09:16 crc kubenswrapper[4697]: I0127 16:09:16.655494 4697 patch_prober.go:28] interesting pod/route-controller-manager-c6cc786fd-2v4gr container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 16:09:16 crc kubenswrapper[4697]: I0127 16:09:16.664384 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-c6cc786fd-2v4gr" podUID="31443f3c-4452-47aa-a4bd-2ecd733c442d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:09:17 crc kubenswrapper[4697]: I0127 16:09:17.568943 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:09:17 crc kubenswrapper[4697]: E0127 16:09:17.569608 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:09:29 crc kubenswrapper[4697]: I0127 16:09:29.568319 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:09:29 crc kubenswrapper[4697]: E0127 16:09:29.569364 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:09:41 crc kubenswrapper[4697]: I0127 16:09:41.568262 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:09:41 crc kubenswrapper[4697]: E0127 16:09:41.569322 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:09:53 crc kubenswrapper[4697]: I0127 16:09:53.569179 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:09:53 crc kubenswrapper[4697]: E0127 16:09:53.569990 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:10:05 crc kubenswrapper[4697]: I0127 16:10:05.568741 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:10:05 crc kubenswrapper[4697]: E0127 16:10:05.569958 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:10:17 crc kubenswrapper[4697]: I0127 16:10:17.569401 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:10:17 crc kubenswrapper[4697]: E0127 16:10:17.570458 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:10:29 crc kubenswrapper[4697]: I0127 16:10:29.569129 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:10:29 crc kubenswrapper[4697]: E0127 16:10:29.569939 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:10:44 crc kubenswrapper[4697]: I0127 16:10:44.590217 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:10:44 crc kubenswrapper[4697]: E0127 16:10:44.592118 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:10:57 crc kubenswrapper[4697]: I0127 16:10:57.568255 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:10:57 crc kubenswrapper[4697]: E0127 16:10:57.569056 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:11:11 crc kubenswrapper[4697]: I0127 16:11:11.568557 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:11:11 crc kubenswrapper[4697]: E0127 16:11:11.569518 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:11:25 crc kubenswrapper[4697]: I0127 16:11:25.568869 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:11:25 crc kubenswrapper[4697]: E0127 16:11:25.569526 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:11:36 crc kubenswrapper[4697]: I0127 16:11:36.570730 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:11:36 crc kubenswrapper[4697]: E0127 16:11:36.571800 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:11:47 crc kubenswrapper[4697]: I0127 16:11:47.569015 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:11:47 crc kubenswrapper[4697]: E0127 16:11:47.569737 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:12:00 crc kubenswrapper[4697]: I0127 16:12:00.568654 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:12:00 crc kubenswrapper[4697]: E0127 16:12:00.569472 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:12:11 crc kubenswrapper[4697]: I0127 16:12:11.569001 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:12:11 crc kubenswrapper[4697]: E0127 16:12:11.569669 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:12:25 crc kubenswrapper[4697]: I0127 16:12:25.569561 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:12:25 crc kubenswrapper[4697]: E0127 16:12:25.570549 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.043737 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-5476f886c6-mrv5l" podUID="4779f8a7-b446-4128-8800-0b6420fda6d8" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.043795 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" podUID="cb6be63b-c3fd-4e21-a1b3-ffc11357a98f" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.49:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.043841 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-shgkw" podUID="0ecbc291-e00b-42be-b1dc-fd53bcb5256a" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.50:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.043864 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-shgkw" podUID="0ecbc291-e00b-42be-b1dc-fd53bcb5256a" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.50:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.043881 4697 patch_prober.go:28] 
interesting pod/marketplace-operator-79b997595-hwq4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.055473 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" podUID="e4c801e2-39ef-4230-8bb0-fed36eccba1a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.043926 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk5jh" podUID="cb6be63b-c3fd-4e21-a1b3-ffc11357a98f" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.49:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.043914 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-8n4gj" podUID="a29b72d6-fcd5-4a5a-b779-437cfc4c8365" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.77:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.043895 4697 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hwq4c container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 16:12:34 crc kubenswrapper[4697]: I0127 16:12:34.055626 4697 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hwq4c" podUID="e4c801e2-39ef-4230-8bb0-fed36eccba1a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:12:39 crc kubenswrapper[4697]: I0127 16:12:39.568341 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:12:39 crc kubenswrapper[4697]: E0127 16:12:39.569975 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:12:53 crc kubenswrapper[4697]: I0127 16:12:53.568587 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:12:53 crc kubenswrapper[4697]: E0127 16:12:53.569320 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:13:07 crc kubenswrapper[4697]: I0127 16:13:07.568702 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:13:07 crc kubenswrapper[4697]: E0127 16:13:07.569419 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:13:18 crc kubenswrapper[4697]: I0127 16:13:18.568937 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:13:18 crc kubenswrapper[4697]: E0127 16:13:18.569604 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:13:31 crc kubenswrapper[4697]: I0127 16:13:31.568862 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:13:31 crc kubenswrapper[4697]: I0127 16:13:31.828095 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"69f7f7f2383abcc5335b945a24d4e8423be42e0d7cb37b789173d63bf5bb273d"} Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.664592 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fr6pp"] Jan 27 16:14:40 crc kubenswrapper[4697]: E0127 16:14:40.665520 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="registry-server" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.665535 4697 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="registry-server" Jan 27 16:14:40 crc kubenswrapper[4697]: E0127 16:14:40.665561 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="extract-utilities" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.665570 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="extract-utilities" Jan 27 16:14:40 crc kubenswrapper[4697]: E0127 16:14:40.665595 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="extract-content" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.665603 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="extract-content" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.665869 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="760e5be0-0b0c-40b2-90c4-b30b1144793c" containerName="registry-server" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.667448 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.680520 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fr6pp"] Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.743500 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-catalog-content\") pod \"certified-operators-fr6pp\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.743564 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-utilities\") pod \"certified-operators-fr6pp\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.743618 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j8tx\" (UniqueName: \"kubernetes.io/projected/2f480ad6-7b35-41ee-83d6-ae4008562d2d-kube-api-access-8j8tx\") pod \"certified-operators-fr6pp\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.845647 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-catalog-content\") pod \"certified-operators-fr6pp\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.845702 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-utilities\") pod \"certified-operators-fr6pp\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.845759 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j8tx\" (UniqueName: \"kubernetes.io/projected/2f480ad6-7b35-41ee-83d6-ae4008562d2d-kube-api-access-8j8tx\") pod \"certified-operators-fr6pp\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.846162 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-catalog-content\") pod \"certified-operators-fr6pp\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.846256 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-utilities\") pod \"certified-operators-fr6pp\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.864670 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j8tx\" (UniqueName: \"kubernetes.io/projected/2f480ad6-7b35-41ee-83d6-ae4008562d2d-kube-api-access-8j8tx\") pod \"certified-operators-fr6pp\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:40 crc kubenswrapper[4697]: I0127 16:14:40.989805 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:14:41 crc kubenswrapper[4697]: I0127 16:14:41.559979 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fr6pp"] Jan 27 16:14:42 crc kubenswrapper[4697]: I0127 16:14:42.391755 4697 generic.go:334] "Generic (PLEG): container finished" podID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerID="2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5" exitCode=0 Jan 27 16:14:42 crc kubenswrapper[4697]: I0127 16:14:42.391877 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr6pp" event={"ID":"2f480ad6-7b35-41ee-83d6-ae4008562d2d","Type":"ContainerDied","Data":"2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5"} Jan 27 16:14:42 crc kubenswrapper[4697]: I0127 16:14:42.393185 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr6pp" event={"ID":"2f480ad6-7b35-41ee-83d6-ae4008562d2d","Type":"ContainerStarted","Data":"c10bf9de0033bd700684e8018a2ca68c33e8e8b7aa180e5e742249a8e65d5d70"} Jan 27 16:14:42 crc kubenswrapper[4697]: I0127 16:14:42.394246 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:14:44 crc kubenswrapper[4697]: I0127 16:14:44.413149 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr6pp" event={"ID":"2f480ad6-7b35-41ee-83d6-ae4008562d2d","Type":"ContainerStarted","Data":"961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d"} Jan 27 16:14:51 crc kubenswrapper[4697]: I0127 16:14:51.474137 4697 generic.go:334] "Generic (PLEG): container finished" podID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerID="961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d" exitCode=0 Jan 27 16:14:51 crc kubenswrapper[4697]: I0127 16:14:51.474321 4697 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-fr6pp" event={"ID":"2f480ad6-7b35-41ee-83d6-ae4008562d2d","Type":"ContainerDied","Data":"961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d"} Jan 27 16:14:53 crc kubenswrapper[4697]: I0127 16:14:53.491637 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr6pp" event={"ID":"2f480ad6-7b35-41ee-83d6-ae4008562d2d","Type":"ContainerStarted","Data":"2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2"} Jan 27 16:14:53 crc kubenswrapper[4697]: I0127 16:14:53.558200 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fr6pp" podStartSLOduration=3.6113723110000002 podStartE2EDuration="13.558172863s" podCreationTimestamp="2026-01-27 16:14:40 +0000 UTC" firstStartedPulling="2026-01-27 16:14:42.394021446 +0000 UTC m=+3978.566421227" lastFinishedPulling="2026-01-27 16:14:52.340821988 +0000 UTC m=+3988.513221779" observedRunningTime="2026-01-27 16:14:53.521403323 +0000 UTC m=+3989.693803104" watchObservedRunningTime="2026-01-27 16:14:53.558172863 +0000 UTC m=+3989.730572644" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.230504 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm"] Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.233698 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.238920 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.240480 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.250613 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm"] Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.358183 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-config-volume\") pod \"collect-profiles-29492175-j5ffm\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.358250 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-secret-volume\") pod \"collect-profiles-29492175-j5ffm\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.358309 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzb9q\" (UniqueName: \"kubernetes.io/projected/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-kube-api-access-dzb9q\") pod \"collect-profiles-29492175-j5ffm\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.460156 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-config-volume\") pod \"collect-profiles-29492175-j5ffm\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.460209 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-secret-volume\") pod \"collect-profiles-29492175-j5ffm\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.460260 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzb9q\" (UniqueName: \"kubernetes.io/projected/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-kube-api-access-dzb9q\") pod \"collect-profiles-29492175-j5ffm\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.461273 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-config-volume\") pod \"collect-profiles-29492175-j5ffm\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.468437 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-secret-volume\") pod \"collect-profiles-29492175-j5ffm\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.483107 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzb9q\" (UniqueName: \"kubernetes.io/projected/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-kube-api-access-dzb9q\") pod \"collect-profiles-29492175-j5ffm\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:00 crc kubenswrapper[4697]: I0127 16:15:00.555452 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:01 crc kubenswrapper[4697]: I0127 16:15:00.990842 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:15:01 crc kubenswrapper[4697]: I0127 16:15:00.993196 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:15:01 crc kubenswrapper[4697]: I0127 16:15:01.149399 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm"] Jan 27 16:15:01 crc kubenswrapper[4697]: I0127 16:15:01.566752 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" event={"ID":"ac5dee5a-b521-4fab-bd01-5a42d62b13c1","Type":"ContainerStarted","Data":"918c67933ae6fc0a243057a3b362750b2f7d8bb22e18d5064a825298af6b9fb7"} Jan 27 16:15:01 crc kubenswrapper[4697]: I0127 16:15:01.567057 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" 
event={"ID":"ac5dee5a-b521-4fab-bd01-5a42d62b13c1","Type":"ContainerStarted","Data":"7e1a7fc713298c2f4b8a32707b5f38b9180b4d5bdf426078e4cbc9bb43436e36"} Jan 27 16:15:02 crc kubenswrapper[4697]: I0127 16:15:02.050737 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fr6pp" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerName="registry-server" probeResult="failure" output=< Jan 27 16:15:02 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 16:15:02 crc kubenswrapper[4697]: > Jan 27 16:15:02 crc kubenswrapper[4697]: I0127 16:15:02.578259 4697 generic.go:334] "Generic (PLEG): container finished" podID="ac5dee5a-b521-4fab-bd01-5a42d62b13c1" containerID="918c67933ae6fc0a243057a3b362750b2f7d8bb22e18d5064a825298af6b9fb7" exitCode=0 Jan 27 16:15:02 crc kubenswrapper[4697]: I0127 16:15:02.586208 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" event={"ID":"ac5dee5a-b521-4fab-bd01-5a42d62b13c1","Type":"ContainerDied","Data":"918c67933ae6fc0a243057a3b362750b2f7d8bb22e18d5064a825298af6b9fb7"} Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.286210 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.341711 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-secret-volume\") pod \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.341813 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-config-volume\") pod \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.341921 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzb9q\" (UniqueName: \"kubernetes.io/projected/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-kube-api-access-dzb9q\") pod \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\" (UID: \"ac5dee5a-b521-4fab-bd01-5a42d62b13c1\") " Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.344376 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac5dee5a-b521-4fab-bd01-5a42d62b13c1" (UID: "ac5dee5a-b521-4fab-bd01-5a42d62b13c1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.355267 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac5dee5a-b521-4fab-bd01-5a42d62b13c1" (UID: "ac5dee5a-b521-4fab-bd01-5a42d62b13c1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.355186 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-kube-api-access-dzb9q" (OuterVolumeSpecName: "kube-api-access-dzb9q") pod "ac5dee5a-b521-4fab-bd01-5a42d62b13c1" (UID: "ac5dee5a-b521-4fab-bd01-5a42d62b13c1"). InnerVolumeSpecName "kube-api-access-dzb9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.444350 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.444381 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.444392 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzb9q\" (UniqueName: \"kubernetes.io/projected/ac5dee5a-b521-4fab-bd01-5a42d62b13c1-kube-api-access-dzb9q\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.595265 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" event={"ID":"ac5dee5a-b521-4fab-bd01-5a42d62b13c1","Type":"ContainerDied","Data":"7e1a7fc713298c2f4b8a32707b5f38b9180b4d5bdf426078e4cbc9bb43436e36"} Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.595491 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-j5ffm" Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.595305 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1a7fc713298c2f4b8a32707b5f38b9180b4d5bdf426078e4cbc9bb43436e36" Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.704013 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt"] Jan 27 16:15:04 crc kubenswrapper[4697]: I0127 16:15:04.712911 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-v7pxt"] Jan 27 16:15:06 crc kubenswrapper[4697]: I0127 16:15:06.580369 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4c4714f-1111-4d49-88a7-e1ac4dfa70b6" path="/var/lib/kubelet/pods/f4c4714f-1111-4d49-88a7-e1ac4dfa70b6/volumes" Jan 27 16:15:11 crc kubenswrapper[4697]: I0127 16:15:11.047586 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:15:11 crc kubenswrapper[4697]: I0127 16:15:11.127025 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:15:11 crc kubenswrapper[4697]: I0127 16:15:11.869702 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fr6pp"] Jan 27 16:15:12 crc kubenswrapper[4697]: I0127 16:15:12.669162 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fr6pp" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerName="registry-server" containerID="cri-o://2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2" gracePeriod=2 Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.284637 4697 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.345186 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j8tx\" (UniqueName: \"kubernetes.io/projected/2f480ad6-7b35-41ee-83d6-ae4008562d2d-kube-api-access-8j8tx\") pod \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.345228 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-catalog-content\") pod \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.345313 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-utilities\") pod \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\" (UID: \"2f480ad6-7b35-41ee-83d6-ae4008562d2d\") " Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.346324 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-utilities" (OuterVolumeSpecName: "utilities") pod "2f480ad6-7b35-41ee-83d6-ae4008562d2d" (UID: "2f480ad6-7b35-41ee-83d6-ae4008562d2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.352976 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f480ad6-7b35-41ee-83d6-ae4008562d2d-kube-api-access-8j8tx" (OuterVolumeSpecName: "kube-api-access-8j8tx") pod "2f480ad6-7b35-41ee-83d6-ae4008562d2d" (UID: "2f480ad6-7b35-41ee-83d6-ae4008562d2d"). InnerVolumeSpecName "kube-api-access-8j8tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.397153 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f480ad6-7b35-41ee-83d6-ae4008562d2d" (UID: "2f480ad6-7b35-41ee-83d6-ae4008562d2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.447095 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j8tx\" (UniqueName: \"kubernetes.io/projected/2f480ad6-7b35-41ee-83d6-ae4008562d2d-kube-api-access-8j8tx\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.447128 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.447138 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f480ad6-7b35-41ee-83d6-ae4008562d2d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.687683 4697 generic.go:334] "Generic (PLEG): container finished" podID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerID="2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2" exitCode=0 Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.687753 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr6pp" event={"ID":"2f480ad6-7b35-41ee-83d6-ae4008562d2d","Type":"ContainerDied","Data":"2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2"} Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.687883 4697 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-fr6pp" event={"ID":"2f480ad6-7b35-41ee-83d6-ae4008562d2d","Type":"ContainerDied","Data":"c10bf9de0033bd700684e8018a2ca68c33e8e8b7aa180e5e742249a8e65d5d70"} Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.687932 4697 scope.go:117] "RemoveContainer" containerID="2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.688019 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fr6pp" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.741413 4697 scope.go:117] "RemoveContainer" containerID="961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.759695 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fr6pp"] Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.768125 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fr6pp"] Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.792584 4697 scope.go:117] "RemoveContainer" containerID="2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.816866 4697 scope.go:117] "RemoveContainer" containerID="2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2" Jan 27 16:15:13 crc kubenswrapper[4697]: E0127 16:15:13.817315 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2\": container with ID starting with 2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2 not found: ID does not exist" containerID="2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 
16:15:13.817345 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2"} err="failed to get container status \"2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2\": rpc error: code = NotFound desc = could not find container \"2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2\": container with ID starting with 2db8b7874a9f33b81f20c352dd2850a6564b612cca9e3ffd4c0565757b0fa2e2 not found: ID does not exist" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.817369 4697 scope.go:117] "RemoveContainer" containerID="961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d" Jan 27 16:15:13 crc kubenswrapper[4697]: E0127 16:15:13.817560 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d\": container with ID starting with 961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d not found: ID does not exist" containerID="961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.817583 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d"} err="failed to get container status \"961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d\": rpc error: code = NotFound desc = could not find container \"961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d\": container with ID starting with 961f36bf04db1b0d6ce1eccd3e7b759459e0ae75dd25558f352acac60835960d not found: ID does not exist" Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.817598 4697 scope.go:117] "RemoveContainer" containerID="2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5" Jan 27 16:15:13 crc 
kubenswrapper[4697]: E0127 16:15:13.817812 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5\": container with ID starting with 2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5 not found: ID does not exist" containerID="2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5"
Jan 27 16:15:13 crc kubenswrapper[4697]: I0127 16:15:13.817837 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5"} err="failed to get container status \"2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5\": rpc error: code = NotFound desc = could not find container \"2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5\": container with ID starting with 2897602e51e79266813b0435157ff21335ce43a4812f9326b09ac30cde3dd1f5 not found: ID does not exist"
Jan 27 16:15:14 crc kubenswrapper[4697]: I0127 16:15:14.580664 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" path="/var/lib/kubelet/pods/2f480ad6-7b35-41ee-83d6-ae4008562d2d/volumes"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.325931 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ktsxv"]
Jan 27 16:15:40 crc kubenswrapper[4697]: E0127 16:15:40.327473 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerName="extract-content"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.327499 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerName="extract-content"
Jan 27 16:15:40 crc kubenswrapper[4697]: E0127 16:15:40.327511 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerName="registry-server"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.327518 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerName="registry-server"
Jan 27 16:15:40 crc kubenswrapper[4697]: E0127 16:15:40.327535 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerName="extract-utilities"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.327543 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerName="extract-utilities"
Jan 27 16:15:40 crc kubenswrapper[4697]: E0127 16:15:40.327566 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5dee5a-b521-4fab-bd01-5a42d62b13c1" containerName="collect-profiles"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.327572 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5dee5a-b521-4fab-bd01-5a42d62b13c1" containerName="collect-profiles"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.327757 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f480ad6-7b35-41ee-83d6-ae4008562d2d" containerName="registry-server"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.327867 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5dee5a-b521-4fab-bd01-5a42d62b13c1" containerName="collect-profiles"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.329138 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.347122 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktsxv"]
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.385184 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbhg8\" (UniqueName: \"kubernetes.io/projected/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-kube-api-access-sbhg8\") pod \"community-operators-ktsxv\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") " pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.385236 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-utilities\") pod \"community-operators-ktsxv\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") " pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.385350 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-catalog-content\") pod \"community-operators-ktsxv\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") " pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.486360 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-catalog-content\") pod \"community-operators-ktsxv\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") " pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.486486 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbhg8\" (UniqueName: \"kubernetes.io/projected/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-kube-api-access-sbhg8\") pod \"community-operators-ktsxv\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") " pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.486531 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-utilities\") pod \"community-operators-ktsxv\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") " pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.487033 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-catalog-content\") pod \"community-operators-ktsxv\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") " pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.487836 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-utilities\") pod \"community-operators-ktsxv\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") " pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.508459 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbhg8\" (UniqueName: \"kubernetes.io/projected/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-kube-api-access-sbhg8\") pod \"community-operators-ktsxv\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") " pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:40 crc kubenswrapper[4697]: I0127 16:15:40.650771 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:41 crc kubenswrapper[4697]: I0127 16:15:41.311458 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktsxv"]
Jan 27 16:15:41 crc kubenswrapper[4697]: I0127 16:15:41.952162 4697 generic.go:334] "Generic (PLEG): container finished" podID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerID="ddd9c5587f1695246288e4eeb97f1ca36bb2f9d7fa7bdebc9db32d85fe58384b" exitCode=0
Jan 27 16:15:41 crc kubenswrapper[4697]: I0127 16:15:41.952240 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktsxv" event={"ID":"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976","Type":"ContainerDied","Data":"ddd9c5587f1695246288e4eeb97f1ca36bb2f9d7fa7bdebc9db32d85fe58384b"}
Jan 27 16:15:41 crc kubenswrapper[4697]: I0127 16:15:41.952479 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktsxv" event={"ID":"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976","Type":"ContainerStarted","Data":"76ac7342fa9ceb5d10589e9c600a55c51e3c74c3bd13dd111fadfebc7e3f6d10"}
Jan 27 16:15:43 crc kubenswrapper[4697]: I0127 16:15:43.985849 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktsxv" event={"ID":"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976","Type":"ContainerStarted","Data":"058bb802daa064568f6cbe4d6239c316289a01dba48f2e2c63606a2081722a26"}
Jan 27 16:15:46 crc kubenswrapper[4697]: I0127 16:15:46.002388 4697 generic.go:334] "Generic (PLEG): container finished" podID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerID="058bb802daa064568f6cbe4d6239c316289a01dba48f2e2c63606a2081722a26" exitCode=0
Jan 27 16:15:46 crc kubenswrapper[4697]: I0127 16:15:46.002471 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktsxv" event={"ID":"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976","Type":"ContainerDied","Data":"058bb802daa064568f6cbe4d6239c316289a01dba48f2e2c63606a2081722a26"}
Jan 27 16:15:48 crc kubenswrapper[4697]: I0127 16:15:48.033334 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktsxv" event={"ID":"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976","Type":"ContainerStarted","Data":"0bea1c78d3e8472bd5f1e837caef39d38afc6a9c33928ddc359d6dcfe52eb588"}
Jan 27 16:15:48 crc kubenswrapper[4697]: I0127 16:15:48.062726 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ktsxv" podStartSLOduration=3.065421182 podStartE2EDuration="8.062709732s" podCreationTimestamp="2026-01-27 16:15:40 +0000 UTC" firstStartedPulling="2026-01-27 16:15:41.953953828 +0000 UTC m=+4038.126353609" lastFinishedPulling="2026-01-27 16:15:46.951242368 +0000 UTC m=+4043.123642159" observedRunningTime="2026-01-27 16:15:48.054930072 +0000 UTC m=+4044.227329843" watchObservedRunningTime="2026-01-27 16:15:48.062709732 +0000 UTC m=+4044.235109513"
Jan 27 16:15:48 crc kubenswrapper[4697]: I0127 16:15:48.937613 4697 scope.go:117] "RemoveContainer" containerID="a26dc14ed26f93a6b72b3e2ca898160523a4bf5818de3bb80b7087b1a0496cd3"
Jan 27 16:15:50 crc kubenswrapper[4697]: I0127 16:15:50.651560 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:50 crc kubenswrapper[4697]: I0127 16:15:50.651905 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:50 crc kubenswrapper[4697]: I0127 16:15:50.705363 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.618467 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7cbmw"]
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.622341 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.628585 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cbmw"]
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.778214 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-catalog-content\") pod \"redhat-marketplace-7cbmw\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") " pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.778293 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwpd\" (UniqueName: \"kubernetes.io/projected/42f6e16d-e57a-4c8e-8175-ca807f02fd35-kube-api-access-wjwpd\") pod \"redhat-marketplace-7cbmw\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") " pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.778316 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-utilities\") pod \"redhat-marketplace-7cbmw\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") " pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.880878 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjwpd\" (UniqueName: \"kubernetes.io/projected/42f6e16d-e57a-4c8e-8175-ca807f02fd35-kube-api-access-wjwpd\") pod \"redhat-marketplace-7cbmw\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") " pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.880919 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-utilities\") pod \"redhat-marketplace-7cbmw\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") " pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.881052 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-catalog-content\") pod \"redhat-marketplace-7cbmw\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") " pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.882231 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-utilities\") pod \"redhat-marketplace-7cbmw\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") " pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.882435 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-catalog-content\") pod \"redhat-marketplace-7cbmw\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") " pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.910663 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjwpd\" (UniqueName: \"kubernetes.io/projected/42f6e16d-e57a-4c8e-8175-ca807f02fd35-kube-api-access-wjwpd\") pod \"redhat-marketplace-7cbmw\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") " pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:54 crc kubenswrapper[4697]: I0127 16:15:54.953300 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:15:55 crc kubenswrapper[4697]: I0127 16:15:55.111228 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:15:55 crc kubenswrapper[4697]: I0127 16:15:55.111276 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:15:55 crc kubenswrapper[4697]: I0127 16:15:55.500137 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cbmw"]
Jan 27 16:15:55 crc kubenswrapper[4697]: W0127 16:15:55.560854 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f6e16d_e57a_4c8e_8175_ca807f02fd35.slice/crio-0b4aecf05f11eb5e44a8217f7bd571303a33be9a070d77d9bffffccb3eed4cef WatchSource:0}: Error finding container 0b4aecf05f11eb5e44a8217f7bd571303a33be9a070d77d9bffffccb3eed4cef: Status 404 returned error can't find the container with id 0b4aecf05f11eb5e44a8217f7bd571303a33be9a070d77d9bffffccb3eed4cef
Jan 27 16:15:56 crc kubenswrapper[4697]: I0127 16:15:56.119906 4697 generic.go:334] "Generic (PLEG): container finished" podID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerID="a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545" exitCode=0
Jan 27 16:15:56 crc kubenswrapper[4697]: I0127 16:15:56.120056 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cbmw" event={"ID":"42f6e16d-e57a-4c8e-8175-ca807f02fd35","Type":"ContainerDied","Data":"a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545"}
Jan 27 16:15:56 crc kubenswrapper[4697]: I0127 16:15:56.120227 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cbmw" event={"ID":"42f6e16d-e57a-4c8e-8175-ca807f02fd35","Type":"ContainerStarted","Data":"0b4aecf05f11eb5e44a8217f7bd571303a33be9a070d77d9bffffccb3eed4cef"}
Jan 27 16:15:58 crc kubenswrapper[4697]: I0127 16:15:58.161497 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cbmw" event={"ID":"42f6e16d-e57a-4c8e-8175-ca807f02fd35","Type":"ContainerStarted","Data":"3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe"}
Jan 27 16:15:59 crc kubenswrapper[4697]: I0127 16:15:59.171528 4697 generic.go:334] "Generic (PLEG): container finished" podID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerID="3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe" exitCode=0
Jan 27 16:15:59 crc kubenswrapper[4697]: I0127 16:15:59.171586 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cbmw" event={"ID":"42f6e16d-e57a-4c8e-8175-ca807f02fd35","Type":"ContainerDied","Data":"3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe"}
Jan 27 16:16:00 crc kubenswrapper[4697]: I0127 16:16:00.183601 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cbmw" event={"ID":"42f6e16d-e57a-4c8e-8175-ca807f02fd35","Type":"ContainerStarted","Data":"0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5"}
Jan 27 16:16:00 crc kubenswrapper[4697]: I0127 16:16:00.204276 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7cbmw" podStartSLOduration=2.694283491 podStartE2EDuration="6.204254366s" podCreationTimestamp="2026-01-27 16:15:54 +0000 UTC" firstStartedPulling="2026-01-27 16:15:56.121163276 +0000 UTC m=+4052.293563057" lastFinishedPulling="2026-01-27 16:15:59.631134151 +0000 UTC m=+4055.803533932" observedRunningTime="2026-01-27 16:16:00.198841973 +0000 UTC m=+4056.371241774" watchObservedRunningTime="2026-01-27 16:16:00.204254366 +0000 UTC m=+4056.376654137"
Jan 27 16:16:00 crc kubenswrapper[4697]: I0127 16:16:00.711807 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:16:01 crc kubenswrapper[4697]: I0127 16:16:01.980699 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ktsxv"]
Jan 27 16:16:01 crc kubenswrapper[4697]: I0127 16:16:01.981184 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ktsxv" podUID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerName="registry-server" containerID="cri-o://0bea1c78d3e8472bd5f1e837caef39d38afc6a9c33928ddc359d6dcfe52eb588" gracePeriod=2
Jan 27 16:16:02 crc kubenswrapper[4697]: I0127 16:16:02.207092 4697 generic.go:334] "Generic (PLEG): container finished" podID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerID="0bea1c78d3e8472bd5f1e837caef39d38afc6a9c33928ddc359d6dcfe52eb588" exitCode=0
Jan 27 16:16:02 crc kubenswrapper[4697]: I0127 16:16:02.207202 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktsxv" event={"ID":"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976","Type":"ContainerDied","Data":"0bea1c78d3e8472bd5f1e837caef39d38afc6a9c33928ddc359d6dcfe52eb588"}
Jan 27 16:16:02 crc kubenswrapper[4697]: I0127 16:16:02.756713 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:16:02 crc kubenswrapper[4697]: I0127 16:16:02.905066 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-catalog-content\") pod \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") "
Jan 27 16:16:02 crc kubenswrapper[4697]: I0127 16:16:02.905220 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-utilities\") pod \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") "
Jan 27 16:16:02 crc kubenswrapper[4697]: I0127 16:16:02.905324 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbhg8\" (UniqueName: \"kubernetes.io/projected/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-kube-api-access-sbhg8\") pod \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\" (UID: \"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976\") "
Jan 27 16:16:02 crc kubenswrapper[4697]: I0127 16:16:02.906020 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-utilities" (OuterVolumeSpecName: "utilities") pod "5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" (UID: "5f7b4aaf-a11a-436a-b131-fdcf5fcfe976"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:16:02 crc kubenswrapper[4697]: I0127 16:16:02.910885 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-kube-api-access-sbhg8" (OuterVolumeSpecName: "kube-api-access-sbhg8") pod "5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" (UID: "5f7b4aaf-a11a-436a-b131-fdcf5fcfe976"). InnerVolumeSpecName "kube-api-access-sbhg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:16:02 crc kubenswrapper[4697]: I0127 16:16:02.966140 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" (UID: "5f7b4aaf-a11a-436a-b131-fdcf5fcfe976"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.008279 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.008323 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbhg8\" (UniqueName: \"kubernetes.io/projected/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-kube-api-access-sbhg8\") on node \"crc\" DevicePath \"\""
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.008339 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.218137 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktsxv" event={"ID":"5f7b4aaf-a11a-436a-b131-fdcf5fcfe976","Type":"ContainerDied","Data":"76ac7342fa9ceb5d10589e9c600a55c51e3c74c3bd13dd111fadfebc7e3f6d10"}
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.218203 4697 scope.go:117] "RemoveContainer" containerID="0bea1c78d3e8472bd5f1e837caef39d38afc6a9c33928ddc359d6dcfe52eb588"
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.218211 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktsxv"
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.263078 4697 scope.go:117] "RemoveContainer" containerID="058bb802daa064568f6cbe4d6239c316289a01dba48f2e2c63606a2081722a26"
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.265151 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ktsxv"]
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.275865 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ktsxv"]
Jan 27 16:16:03 crc kubenswrapper[4697]: I0127 16:16:03.288881 4697 scope.go:117] "RemoveContainer" containerID="ddd9c5587f1695246288e4eeb97f1ca36bb2f9d7fa7bdebc9db32d85fe58384b"
Jan 27 16:16:04 crc kubenswrapper[4697]: I0127 16:16:04.580122 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" path="/var/lib/kubelet/pods/5f7b4aaf-a11a-436a-b131-fdcf5fcfe976/volumes"
Jan 27 16:16:04 crc kubenswrapper[4697]: I0127 16:16:04.953887 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:16:04 crc kubenswrapper[4697]: I0127 16:16:04.954222 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:16:05 crc kubenswrapper[4697]: I0127 16:16:05.015108 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:16:05 crc kubenswrapper[4697]: I0127 16:16:05.297148 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:16:06 crc kubenswrapper[4697]: I0127 16:16:06.181329 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cbmw"]
Jan 27 16:16:07 crc kubenswrapper[4697]: I0127 16:16:07.253659 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7cbmw" podUID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerName="registry-server" containerID="cri-o://0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5" gracePeriod=2
Jan 27 16:16:07 crc kubenswrapper[4697]: I0127 16:16:07.950109 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.107058 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-catalog-content\") pod \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") "
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.107154 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjwpd\" (UniqueName: \"kubernetes.io/projected/42f6e16d-e57a-4c8e-8175-ca807f02fd35-kube-api-access-wjwpd\") pod \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") "
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.107265 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-utilities\") pod \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\" (UID: \"42f6e16d-e57a-4c8e-8175-ca807f02fd35\") "
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.108319 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-utilities" (OuterVolumeSpecName: "utilities") pod "42f6e16d-e57a-4c8e-8175-ca807f02fd35" (UID: "42f6e16d-e57a-4c8e-8175-ca807f02fd35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.137661 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f6e16d-e57a-4c8e-8175-ca807f02fd35-kube-api-access-wjwpd" (OuterVolumeSpecName: "kube-api-access-wjwpd") pod "42f6e16d-e57a-4c8e-8175-ca807f02fd35" (UID: "42f6e16d-e57a-4c8e-8175-ca807f02fd35"). InnerVolumeSpecName "kube-api-access-wjwpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.144091 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42f6e16d-e57a-4c8e-8175-ca807f02fd35" (UID: "42f6e16d-e57a-4c8e-8175-ca807f02fd35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.209563 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.209587 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjwpd\" (UniqueName: \"kubernetes.io/projected/42f6e16d-e57a-4c8e-8175-ca807f02fd35-kube-api-access-wjwpd\") on node \"crc\" DevicePath \"\""
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.209598 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f6e16d-e57a-4c8e-8175-ca807f02fd35-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.263813 4697 generic.go:334] "Generic (PLEG): container finished" podID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerID="0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5" exitCode=0
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.263861 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cbmw" event={"ID":"42f6e16d-e57a-4c8e-8175-ca807f02fd35","Type":"ContainerDied","Data":"0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5"}
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.263869 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cbmw"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.263887 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cbmw" event={"ID":"42f6e16d-e57a-4c8e-8175-ca807f02fd35","Type":"ContainerDied","Data":"0b4aecf05f11eb5e44a8217f7bd571303a33be9a070d77d9bffffccb3eed4cef"}
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.263906 4697 scope.go:117] "RemoveContainer" containerID="0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.297107 4697 scope.go:117] "RemoveContainer" containerID="3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.302457 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cbmw"]
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.309603 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cbmw"]
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.325571 4697 scope.go:117] "RemoveContainer" containerID="a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.366688 4697 scope.go:117] "RemoveContainer" containerID="0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5"
Jan 27 16:16:08 crc kubenswrapper[4697]: E0127 16:16:08.367139 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5\": container with ID starting with 0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5 not found: ID does not exist" containerID="0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.367183 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5"} err="failed to get container status \"0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5\": rpc error: code = NotFound desc = could not find container \"0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5\": container with ID starting with 0a2ef7f8a9c86b92194738d2f26579617380c11330fbe5d7e1b6f8258fac85e5 not found: ID does not exist"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.367210 4697 scope.go:117] "RemoveContainer" containerID="3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe"
Jan 27 16:16:08 crc kubenswrapper[4697]: E0127 16:16:08.368678 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe\": container with ID starting with 3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe not found: ID does not exist" containerID="3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.368718 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe"} err="failed to get container status \"3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe\": rpc error: code = NotFound desc = could not find container \"3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe\": container with ID starting with 3e8c82e85784300faf5f15477488f947fbcf47360ffc4edeafcfe3476f1a5cfe not found: ID does not exist"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.368746 4697 scope.go:117] "RemoveContainer" containerID="a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545"
Jan 27 16:16:08 crc kubenswrapper[4697]: E0127 16:16:08.369693 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545\": container with ID starting with a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545 not found: ID does not exist" containerID="a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.369726 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545"} err="failed to get container status \"a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545\": rpc error: code = NotFound desc = could not find container \"a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545\": container with ID starting with a04ef793889675f997c1b082b8b15ebc51d5bfd176845efdc30bcb145c3da545 not found: ID does not exist"
Jan 27 16:16:08 crc kubenswrapper[4697]: I0127 16:16:08.580652 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" path="/var/lib/kubelet/pods/42f6e16d-e57a-4c8e-8175-ca807f02fd35/volumes"
Jan 27 16:16:25 crc kubenswrapper[4697]: I0127 16:16:25.108748 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:16:25 crc kubenswrapper[4697]: I0127 16:16:25.109434 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.105342 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hqxp4"]
Jan 27 16:16:31 crc kubenswrapper[4697]: E0127 16:16:31.106153 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerName="extract-content"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.106164 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerName="extract-content"
Jan 27 16:16:31 crc kubenswrapper[4697]: E0127 16:16:31.106181 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerName="registry-server"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.106188 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerName="registry-server"
Jan 27 16:16:31 crc kubenswrapper[4697]: E0127 16:16:31.106199 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerName="extract-utilities"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.106205 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerName="extract-utilities"
Jan 27 16:16:31 crc kubenswrapper[4697]: E0127 16:16:31.106216 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerName="registry-server"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.106224 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerName="registry-server"
Jan 27 16:16:31 crc kubenswrapper[4697]: E0127 16:16:31.106240 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerName="extract-utilities"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.106246 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerName="extract-utilities"
Jan 27 16:16:31 crc kubenswrapper[4697]: E0127 16:16:31.106260 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerName="extract-content"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.106265 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerName="extract-content"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.106433 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f6e16d-e57a-4c8e-8175-ca807f02fd35" containerName="registry-server"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.106455 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f7b4aaf-a11a-436a-b131-fdcf5fcfe976" containerName="registry-server"
Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.107683 4697 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.116456 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqxp4"] Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.175524 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-catalog-content\") pod \"redhat-operators-hqxp4\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.175596 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8k8\" (UniqueName: \"kubernetes.io/projected/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-kube-api-access-kv8k8\") pod \"redhat-operators-hqxp4\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.175668 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-utilities\") pod \"redhat-operators-hqxp4\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.277282 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-catalog-content\") pod \"redhat-operators-hqxp4\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.277396 4697 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-kv8k8\" (UniqueName: \"kubernetes.io/projected/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-kube-api-access-kv8k8\") pod \"redhat-operators-hqxp4\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.277500 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-utilities\") pod \"redhat-operators-hqxp4\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.277736 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-catalog-content\") pod \"redhat-operators-hqxp4\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.277933 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-utilities\") pod \"redhat-operators-hqxp4\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.303994 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv8k8\" (UniqueName: \"kubernetes.io/projected/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-kube-api-access-kv8k8\") pod \"redhat-operators-hqxp4\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:31 crc kubenswrapper[4697]: I0127 16:16:31.476922 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:32 crc kubenswrapper[4697]: I0127 16:16:32.010970 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqxp4"] Jan 27 16:16:32 crc kubenswrapper[4697]: I0127 16:16:32.469845 4697 generic.go:334] "Generic (PLEG): container finished" podID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerID="358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c" exitCode=0 Jan 27 16:16:32 crc kubenswrapper[4697]: I0127 16:16:32.470893 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqxp4" event={"ID":"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49","Type":"ContainerDied","Data":"358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c"} Jan 27 16:16:32 crc kubenswrapper[4697]: I0127 16:16:32.470969 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqxp4" event={"ID":"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49","Type":"ContainerStarted","Data":"a74f005ab21b08ecfee43ccbe43d2aba0879ff798687e4dec7fa253259695b24"} Jan 27 16:16:34 crc kubenswrapper[4697]: I0127 16:16:34.489744 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqxp4" event={"ID":"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49","Type":"ContainerStarted","Data":"07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04"} Jan 27 16:16:40 crc kubenswrapper[4697]: I0127 16:16:40.542480 4697 generic.go:334] "Generic (PLEG): container finished" podID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerID="07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04" exitCode=0 Jan 27 16:16:40 crc kubenswrapper[4697]: I0127 16:16:40.542568 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqxp4" 
event={"ID":"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49","Type":"ContainerDied","Data":"07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04"} Jan 27 16:16:41 crc kubenswrapper[4697]: I0127 16:16:41.553965 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqxp4" event={"ID":"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49","Type":"ContainerStarted","Data":"c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b"} Jan 27 16:16:41 crc kubenswrapper[4697]: I0127 16:16:41.577472 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hqxp4" podStartSLOduration=1.9844408470000001 podStartE2EDuration="10.577449117s" podCreationTimestamp="2026-01-27 16:16:31 +0000 UTC" firstStartedPulling="2026-01-27 16:16:32.472319271 +0000 UTC m=+4088.644719052" lastFinishedPulling="2026-01-27 16:16:41.065327541 +0000 UTC m=+4097.237727322" observedRunningTime="2026-01-27 16:16:41.570901306 +0000 UTC m=+4097.743301097" watchObservedRunningTime="2026-01-27 16:16:41.577449117 +0000 UTC m=+4097.749848918" Jan 27 16:16:51 crc kubenswrapper[4697]: I0127 16:16:51.478189 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:51 crc kubenswrapper[4697]: I0127 16:16:51.478598 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:16:52 crc kubenswrapper[4697]: I0127 16:16:52.531456 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqxp4" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerName="registry-server" probeResult="failure" output=< Jan 27 16:16:52 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 16:16:52 crc kubenswrapper[4697]: > Jan 27 16:16:55 crc kubenswrapper[4697]: I0127 16:16:55.108975 4697 patch_prober.go:28] interesting 
pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:16:55 crc kubenswrapper[4697]: I0127 16:16:55.109375 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:16:55 crc kubenswrapper[4697]: I0127 16:16:55.109442 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 16:16:55 crc kubenswrapper[4697]: I0127 16:16:55.110330 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69f7f7f2383abcc5335b945a24d4e8423be42e0d7cb37b789173d63bf5bb273d"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:16:55 crc kubenswrapper[4697]: I0127 16:16:55.110401 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://69f7f7f2383abcc5335b945a24d4e8423be42e0d7cb37b789173d63bf5bb273d" gracePeriod=600 Jan 27 16:16:55 crc kubenswrapper[4697]: I0127 16:16:55.669242 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="69f7f7f2383abcc5335b945a24d4e8423be42e0d7cb37b789173d63bf5bb273d" exitCode=0 Jan 27 16:16:55 crc kubenswrapper[4697]: I0127 16:16:55.669294 
4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"69f7f7f2383abcc5335b945a24d4e8423be42e0d7cb37b789173d63bf5bb273d"} Jan 27 16:16:55 crc kubenswrapper[4697]: I0127 16:16:55.669331 4697 scope.go:117] "RemoveContainer" containerID="c1126d34877407d4cc1a3cef5d83fc9212c644d8a477a9d2a40e1aca1c69dcdf" Jan 27 16:16:56 crc kubenswrapper[4697]: I0127 16:16:56.681188 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"} Jan 27 16:17:01 crc kubenswrapper[4697]: I0127 16:17:01.555141 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:17:01 crc kubenswrapper[4697]: I0127 16:17:01.617267 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:17:02 crc kubenswrapper[4697]: I0127 16:17:02.302241 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqxp4"] Jan 27 16:17:02 crc kubenswrapper[4697]: I0127 16:17:02.733053 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hqxp4" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerName="registry-server" containerID="cri-o://c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b" gracePeriod=2 Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.325740 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.480155 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-catalog-content\") pod \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.480301 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv8k8\" (UniqueName: \"kubernetes.io/projected/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-kube-api-access-kv8k8\") pod \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.480332 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-utilities\") pod \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\" (UID: \"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49\") " Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.481711 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-utilities" (OuterVolumeSpecName: "utilities") pod "a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" (UID: "a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.492683 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-kube-api-access-kv8k8" (OuterVolumeSpecName: "kube-api-access-kv8k8") pod "a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" (UID: "a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49"). InnerVolumeSpecName "kube-api-access-kv8k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.582935 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.582966 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv8k8\" (UniqueName: \"kubernetes.io/projected/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-kube-api-access-kv8k8\") on node \"crc\" DevicePath \"\"" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.612497 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" (UID: "a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.684806 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.743205 4697 generic.go:334] "Generic (PLEG): container finished" podID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerID="c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b" exitCode=0 Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.743259 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqxp4" event={"ID":"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49","Type":"ContainerDied","Data":"c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b"} Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.743283 4697 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqxp4" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.743317 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqxp4" event={"ID":"a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49","Type":"ContainerDied","Data":"a74f005ab21b08ecfee43ccbe43d2aba0879ff798687e4dec7fa253259695b24"} Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.743338 4697 scope.go:117] "RemoveContainer" containerID="c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.782102 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqxp4"] Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.782471 4697 scope.go:117] "RemoveContainer" containerID="07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.795265 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hqxp4"] Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.835031 4697 scope.go:117] "RemoveContainer" containerID="358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.854833 4697 scope.go:117] "RemoveContainer" containerID="c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b" Jan 27 16:17:03 crc kubenswrapper[4697]: E0127 16:17:03.855518 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b\": container with ID starting with c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b not found: ID does not exist" containerID="c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.855555 4697 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b"} err="failed to get container status \"c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b\": rpc error: code = NotFound desc = could not find container \"c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b\": container with ID starting with c0921427897d84fbacf1b6b95fe2f204a540620842b52342b6a96e1be097715b not found: ID does not exist" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.855583 4697 scope.go:117] "RemoveContainer" containerID="07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04" Jan 27 16:17:03 crc kubenswrapper[4697]: E0127 16:17:03.855761 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04\": container with ID starting with 07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04 not found: ID does not exist" containerID="07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.855826 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04"} err="failed to get container status \"07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04\": rpc error: code = NotFound desc = could not find container \"07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04\": container with ID starting with 07ef58b704d7fd95f5a013ae415d8b0483fa547144b42f471c540cdc9778eb04 not found: ID does not exist" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.855839 4697 scope.go:117] "RemoveContainer" containerID="358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c" Jan 27 16:17:03 crc kubenswrapper[4697]: E0127 
16:17:03.856003 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c\": container with ID starting with 358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c not found: ID does not exist" containerID="358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c" Jan 27 16:17:03 crc kubenswrapper[4697]: I0127 16:17:03.856024 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c"} err="failed to get container status \"358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c\": rpc error: code = NotFound desc = could not find container \"358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c\": container with ID starting with 358287a3789a08dc1c86949f4d8f2181463c60a0e56a3c2cf5021f2a637cb85c not found: ID does not exist" Jan 27 16:17:04 crc kubenswrapper[4697]: I0127 16:17:04.578377 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" path="/var/lib/kubelet/pods/a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49/volumes" Jan 27 16:18:55 crc kubenswrapper[4697]: I0127 16:18:55.109671 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:18:55 crc kubenswrapper[4697]: I0127 16:18:55.110519 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 16:19:25 crc kubenswrapper[4697]: I0127 16:19:25.109272 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:19:25 crc kubenswrapper[4697]: I0127 16:19:25.110050 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:19:55 crc kubenswrapper[4697]: I0127 16:19:55.109389 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:19:55 crc kubenswrapper[4697]: I0127 16:19:55.109952 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:19:55 crc kubenswrapper[4697]: I0127 16:19:55.109998 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 16:19:55 crc kubenswrapper[4697]: I0127 16:19:55.110765 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"} 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:19:55 crc kubenswrapper[4697]: I0127 16:19:55.110836 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" gracePeriod=600 Jan 27 16:19:55 crc kubenswrapper[4697]: E0127 16:19:55.270228 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:19:55 crc kubenswrapper[4697]: I0127 16:19:55.991374 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" exitCode=0 Jan 27 16:19:55 crc kubenswrapper[4697]: I0127 16:19:55.991449 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"} Jan 27 16:19:55 crc kubenswrapper[4697]: I0127 16:19:55.991859 4697 scope.go:117] "RemoveContainer" containerID="69f7f7f2383abcc5335b945a24d4e8423be42e0d7cb37b789173d63bf5bb273d" Jan 27 16:19:55 crc kubenswrapper[4697]: I0127 16:19:55.993111 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" Jan 
27 16:19:55 crc kubenswrapper[4697]: E0127 16:19:55.993557 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:20:06 crc kubenswrapper[4697]: I0127 16:20:06.570037 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:20:06 crc kubenswrapper[4697]: E0127 16:20:06.570773 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:20:17 crc kubenswrapper[4697]: I0127 16:20:17.568525 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:20:17 crc kubenswrapper[4697]: E0127 16:20:17.569698 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:20:32 crc kubenswrapper[4697]: I0127 16:20:32.568287 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:20:32 crc kubenswrapper[4697]: E0127 16:20:32.569029 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:20:47 crc kubenswrapper[4697]: I0127 16:20:47.568327 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:20:47 crc kubenswrapper[4697]: E0127 16:20:47.569149 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:21:00 crc kubenswrapper[4697]: I0127 16:21:00.569225 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:21:00 crc kubenswrapper[4697]: E0127 16:21:00.570100 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:21:11 crc kubenswrapper[4697]: I0127 16:21:11.568537 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:21:11 crc kubenswrapper[4697]: E0127 16:21:11.569365 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:21:22 crc kubenswrapper[4697]: I0127 16:21:22.568909 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:21:22 crc kubenswrapper[4697]: E0127 16:21:22.570038 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:21:34 crc kubenswrapper[4697]: I0127 16:21:34.575164 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:21:34 crc kubenswrapper[4697]: E0127 16:21:34.576318 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:21:45 crc kubenswrapper[4697]: I0127 16:21:45.569136 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:21:45 crc kubenswrapper[4697]: E0127 16:21:45.569979 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:21:58 crc kubenswrapper[4697]: I0127 16:21:58.568117 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:21:58 crc kubenswrapper[4697]: E0127 16:21:58.568894 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:22:02 crc kubenswrapper[4697]: I0127 16:22:02.760185 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="43946a66-4e74-47e4-bfd3-63256993e153" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 27 16:22:11 crc kubenswrapper[4697]: I0127 16:22:11.569088 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:22:11 crc kubenswrapper[4697]: E0127 16:22:11.569947 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:22:19 crc kubenswrapper[4697]: I0127 16:22:19.344423 4697 generic.go:334] "Generic (PLEG): container finished" podID="76805ce8-13c7-4d04-83c6-b70eaf33b9d8" containerID="a4c7a0bbff0ebc952f4d45b407537f098be7e52fba2721dea1e9dd3fafa743bc" exitCode=0
Jan 27 16:22:19 crc kubenswrapper[4697]: I0127 16:22:19.344535 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"76805ce8-13c7-4d04-83c6-b70eaf33b9d8","Type":"ContainerDied","Data":"a4c7a0bbff0ebc952f4d45b407537f098be7e52fba2721dea1e9dd3fafa743bc"}
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.856887 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.947351 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-workdir\") pod \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") "
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.947700 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") "
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.947741 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config-secret\") pod \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") "
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.947763 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ssh-key\") pod \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") "
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.947824 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-config-data\") pod \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") "
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.947856 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config\") pod \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") "
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.947885 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ca-certs\") pod \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") "
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.947915 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbrqc\" (UniqueName: \"kubernetes.io/projected/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-kube-api-access-qbrqc\") pod \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") "
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.947944 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-temporary\") pod \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\" (UID: \"76805ce8-13c7-4d04-83c6-b70eaf33b9d8\") "
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.950453 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "76805ce8-13c7-4d04-83c6-b70eaf33b9d8" (UID: "76805ce8-13c7-4d04-83c6-b70eaf33b9d8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.952965 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-config-data" (OuterVolumeSpecName: "config-data") pod "76805ce8-13c7-4d04-83c6-b70eaf33b9d8" (UID: "76805ce8-13c7-4d04-83c6-b70eaf33b9d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.954915 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "76805ce8-13c7-4d04-83c6-b70eaf33b9d8" (UID: "76805ce8-13c7-4d04-83c6-b70eaf33b9d8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.972541 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-kube-api-access-qbrqc" (OuterVolumeSpecName: "kube-api-access-qbrqc") pod "76805ce8-13c7-4d04-83c6-b70eaf33b9d8" (UID: "76805ce8-13c7-4d04-83c6-b70eaf33b9d8"). InnerVolumeSpecName "kube-api-access-qbrqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.981490 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "76805ce8-13c7-4d04-83c6-b70eaf33b9d8" (UID: "76805ce8-13c7-4d04-83c6-b70eaf33b9d8"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.984914 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "76805ce8-13c7-4d04-83c6-b70eaf33b9d8" (UID: "76805ce8-13c7-4d04-83c6-b70eaf33b9d8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:22:20 crc kubenswrapper[4697]: I0127 16:22:20.987029 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "76805ce8-13c7-4d04-83c6-b70eaf33b9d8" (UID: "76805ce8-13c7-4d04-83c6-b70eaf33b9d8"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.004033 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "76805ce8-13c7-4d04-83c6-b70eaf33b9d8" (UID: "76805ce8-13c7-4d04-83c6-b70eaf33b9d8"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.025061 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "76805ce8-13c7-4d04-83c6-b70eaf33b9d8" (UID: "76805ce8-13c7-4d04-83c6-b70eaf33b9d8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.050112 4697 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.050141 4697 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.050152 4697 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.050163 4697 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.050171 4697 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.050179 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbrqc\" (UniqueName: \"kubernetes.io/projected/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-kube-api-access-qbrqc\") on node \"crc\" DevicePath \"\""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.050188 4697 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.050199 4697 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/76805ce8-13c7-4d04-83c6-b70eaf33b9d8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.052195 4697 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.084942 4697 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.154932 4697 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.373107 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"76805ce8-13c7-4d04-83c6-b70eaf33b9d8","Type":"ContainerDied","Data":"4d185a3b05d3717bb71ef179553bf7e6f6a3da247b1ee10d7e73874cb0a370b8"}
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.373183 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d185a3b05d3717bb71ef179553bf7e6f6a3da247b1ee10d7e73874cb0a370b8"
Jan 27 16:22:21 crc kubenswrapper[4697]: I0127 16:22:21.373190 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 27 16:22:26 crc kubenswrapper[4697]: I0127 16:22:26.568402 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:22:26 crc kubenswrapper[4697]: E0127 16:22:26.569137 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.038621 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 27 16:22:34 crc kubenswrapper[4697]: E0127 16:22:34.039451 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerName="extract-content"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.039990 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerName="extract-content"
Jan 27 16:22:34 crc kubenswrapper[4697]: E0127 16:22:34.040029 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerName="extract-utilities"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.040041 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerName="extract-utilities"
Jan 27 16:22:34 crc kubenswrapper[4697]: E0127 16:22:34.040062 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76805ce8-13c7-4d04-83c6-b70eaf33b9d8" containerName="tempest-tests-tempest-tests-runner"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.040074 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="76805ce8-13c7-4d04-83c6-b70eaf33b9d8" containerName="tempest-tests-tempest-tests-runner"
Jan 27 16:22:34 crc kubenswrapper[4697]: E0127 16:22:34.040120 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerName="registry-server"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.040134 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerName="registry-server"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.040448 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="76805ce8-13c7-4d04-83c6-b70eaf33b9d8" containerName="tempest-tests-tempest-tests-runner"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.040486 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0bc1c18-eb56-4ad4-9d33-c5ea22cd3a49" containerName="registry-server"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.041497 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.047130 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2q9kp"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.050388 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.223361 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"adb48667-7dff-4826-858e-5825e64dfd59\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.223637 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvxs\" (UniqueName: \"kubernetes.io/projected/adb48667-7dff-4826-858e-5825e64dfd59-kube-api-access-6vvxs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"adb48667-7dff-4826-858e-5825e64dfd59\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.325637 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvxs\" (UniqueName: \"kubernetes.io/projected/adb48667-7dff-4826-858e-5825e64dfd59-kube-api-access-6vvxs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"adb48667-7dff-4826-858e-5825e64dfd59\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.325714 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"adb48667-7dff-4826-858e-5825e64dfd59\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.327164 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"adb48667-7dff-4826-858e-5825e64dfd59\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.353102 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvxs\" (UniqueName: \"kubernetes.io/projected/adb48667-7dff-4826-858e-5825e64dfd59-kube-api-access-6vvxs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"adb48667-7dff-4826-858e-5825e64dfd59\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.360343 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"adb48667-7dff-4826-858e-5825e64dfd59\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.374372 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.918907 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 27 16:22:34 crc kubenswrapper[4697]: W0127 16:22:34.927987 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb48667_7dff_4826_858e_5825e64dfd59.slice/crio-a2876640bda4504f1515b05bdc1ad7013ee3e8f1269697273acaa657336b2c06 WatchSource:0}: Error finding container a2876640bda4504f1515b05bdc1ad7013ee3e8f1269697273acaa657336b2c06: Status 404 returned error can't find the container with id a2876640bda4504f1515b05bdc1ad7013ee3e8f1269697273acaa657336b2c06
Jan 27 16:22:34 crc kubenswrapper[4697]: I0127 16:22:34.964596 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 16:22:35 crc kubenswrapper[4697]: I0127 16:22:35.516075 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"adb48667-7dff-4826-858e-5825e64dfd59","Type":"ContainerStarted","Data":"a2876640bda4504f1515b05bdc1ad7013ee3e8f1269697273acaa657336b2c06"}
Jan 27 16:22:37 crc kubenswrapper[4697]: I0127 16:22:37.532883 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"adb48667-7dff-4826-858e-5825e64dfd59","Type":"ContainerStarted","Data":"fa39b1fdd784005392dd59d319017110246dd89db48bc365b7a16e7671b4bdfb"}
Jan 27 16:22:37 crc kubenswrapper[4697]: I0127 16:22:37.551328 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.762926823 podStartE2EDuration="4.551307401s" podCreationTimestamp="2026-01-27 16:22:33 +0000 UTC" firstStartedPulling="2026-01-27 16:22:34.964190689 +0000 UTC m=+4451.136590480" lastFinishedPulling="2026-01-27 16:22:36.752571277 +0000 UTC m=+4452.924971058" observedRunningTime="2026-01-27 16:22:37.544132775 +0000 UTC m=+4453.716532556" watchObservedRunningTime="2026-01-27 16:22:37.551307401 +0000 UTC m=+4453.723707182"
Jan 27 16:22:37 crc kubenswrapper[4697]: I0127 16:22:37.569866 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:22:37 crc kubenswrapper[4697]: E0127 16:22:37.570090 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:22:51 crc kubenswrapper[4697]: I0127 16:22:51.568337 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:22:51 crc kubenswrapper[4697]: E0127 16:22:51.569245 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.560812 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fbgrj/must-gather-hxbrc"]
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.563014 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.568421 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fbgrj"/"kube-root-ca.crt"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.587460 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fbgrj/must-gather-hxbrc"]
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.599070 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fbgrj"/"openshift-service-ca.crt"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.671597 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zxd\" (UniqueName: \"kubernetes.io/projected/42dad571-2960-4925-8218-db035c05b9cb-kube-api-access-67zxd\") pod \"must-gather-hxbrc\" (UID: \"42dad571-2960-4925-8218-db035c05b9cb\") " pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.671683 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42dad571-2960-4925-8218-db035c05b9cb-must-gather-output\") pod \"must-gather-hxbrc\" (UID: \"42dad571-2960-4925-8218-db035c05b9cb\") " pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.773237 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42dad571-2960-4925-8218-db035c05b9cb-must-gather-output\") pod \"must-gather-hxbrc\" (UID: \"42dad571-2960-4925-8218-db035c05b9cb\") " pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.773436 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67zxd\" (UniqueName: \"kubernetes.io/projected/42dad571-2960-4925-8218-db035c05b9cb-kube-api-access-67zxd\") pod \"must-gather-hxbrc\" (UID: \"42dad571-2960-4925-8218-db035c05b9cb\") " pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.773678 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42dad571-2960-4925-8218-db035c05b9cb-must-gather-output\") pod \"must-gather-hxbrc\" (UID: \"42dad571-2960-4925-8218-db035c05b9cb\") " pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.807025 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67zxd\" (UniqueName: \"kubernetes.io/projected/42dad571-2960-4925-8218-db035c05b9cb-kube-api-access-67zxd\") pod \"must-gather-hxbrc\" (UID: \"42dad571-2960-4925-8218-db035c05b9cb\") " pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:23:00 crc kubenswrapper[4697]: I0127 16:23:00.880428 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:23:01 crc kubenswrapper[4697]: I0127 16:23:01.428945 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fbgrj/must-gather-hxbrc"]
Jan 27 16:23:01 crc kubenswrapper[4697]: W0127 16:23:01.560511 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42dad571_2960_4925_8218_db035c05b9cb.slice/crio-3fce9eac7fa6d9cbb36b419789af6d61f32968e65f977c1d850104a8ad5a1ac4 WatchSource:0}: Error finding container 3fce9eac7fa6d9cbb36b419789af6d61f32968e65f977c1d850104a8ad5a1ac4: Status 404 returned error can't find the container with id 3fce9eac7fa6d9cbb36b419789af6d61f32968e65f977c1d850104a8ad5a1ac4
Jan 27 16:23:01 crc kubenswrapper[4697]: I0127 16:23:01.751744 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/must-gather-hxbrc" event={"ID":"42dad571-2960-4925-8218-db035c05b9cb","Type":"ContainerStarted","Data":"3fce9eac7fa6d9cbb36b419789af6d61f32968e65f977c1d850104a8ad5a1ac4"}
Jan 27 16:23:06 crc kubenswrapper[4697]: I0127 16:23:06.568865 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:23:06 crc kubenswrapper[4697]: E0127 16:23:06.570660 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:23:12 crc kubenswrapper[4697]: I0127 16:23:12.865465 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/must-gather-hxbrc" event={"ID":"42dad571-2960-4925-8218-db035c05b9cb","Type":"ContainerStarted","Data":"e9f961bb0a344467d99e50c256c0a2de495c120a013237ff3b3897ff13041e4c"}
Jan 27 16:23:12 crc kubenswrapper[4697]: I0127 16:23:12.866050 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/must-gather-hxbrc" event={"ID":"42dad571-2960-4925-8218-db035c05b9cb","Type":"ContainerStarted","Data":"f3d79fd89f3e5741ad81ab89b3cafd5c401973847770a9d0e653f65dde8239d8"}
Jan 27 16:23:12 crc kubenswrapper[4697]: I0127 16:23:12.890451 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fbgrj/must-gather-hxbrc" podStartSLOduration=3.163756034 podStartE2EDuration="12.890427074s" podCreationTimestamp="2026-01-27 16:23:00 +0000 UTC" firstStartedPulling="2026-01-27 16:23:01.563902076 +0000 UTC m=+4477.736301857" lastFinishedPulling="2026-01-27 16:23:11.290573116 +0000 UTC m=+4487.462972897" observedRunningTime="2026-01-27 16:23:12.881345411 +0000 UTC m=+4489.053745212" watchObservedRunningTime="2026-01-27 16:23:12.890427074 +0000 UTC m=+4489.062826855"
Jan 27 16:23:18 crc kubenswrapper[4697]: I0127 16:23:18.568711 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:23:18 crc kubenswrapper[4697]: E0127 16:23:18.569367 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:23:20 crc kubenswrapper[4697]: I0127 16:23:20.825731 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fbgrj/crc-debug-tnsq5"]
Jan 27 16:23:20 crc kubenswrapper[4697]: I0127 16:23:20.827824 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-tnsq5"
Jan 27 16:23:20 crc kubenswrapper[4697]: I0127 16:23:20.833304 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fbgrj"/"default-dockercfg-jkp4h"
Jan 27 16:23:20 crc kubenswrapper[4697]: I0127 16:23:20.897916 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn84c\" (UniqueName: \"kubernetes.io/projected/9e857b57-231c-40ad-b92d-a127f6d6e798-kube-api-access-rn84c\") pod \"crc-debug-tnsq5\" (UID: \"9e857b57-231c-40ad-b92d-a127f6d6e798\") " pod="openshift-must-gather-fbgrj/crc-debug-tnsq5"
Jan 27 16:23:20 crc kubenswrapper[4697]: I0127 16:23:20.897969 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e857b57-231c-40ad-b92d-a127f6d6e798-host\") pod \"crc-debug-tnsq5\" (UID: \"9e857b57-231c-40ad-b92d-a127f6d6e798\") " pod="openshift-must-gather-fbgrj/crc-debug-tnsq5"
Jan 27 16:23:21 crc kubenswrapper[4697]: I0127 16:23:21.000236 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn84c\" (UniqueName: \"kubernetes.io/projected/9e857b57-231c-40ad-b92d-a127f6d6e798-kube-api-access-rn84c\") pod \"crc-debug-tnsq5\" (UID: \"9e857b57-231c-40ad-b92d-a127f6d6e798\") " pod="openshift-must-gather-fbgrj/crc-debug-tnsq5"
Jan 27 16:23:21 crc kubenswrapper[4697]: I0127 16:23:21.000275 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e857b57-231c-40ad-b92d-a127f6d6e798-host\") pod \"crc-debug-tnsq5\" (UID: \"9e857b57-231c-40ad-b92d-a127f6d6e798\") " pod="openshift-must-gather-fbgrj/crc-debug-tnsq5"
Jan 27 16:23:21 crc kubenswrapper[4697]: I0127 16:23:21.001258 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e857b57-231c-40ad-b92d-a127f6d6e798-host\") pod \"crc-debug-tnsq5\" (UID: \"9e857b57-231c-40ad-b92d-a127f6d6e798\") " pod="openshift-must-gather-fbgrj/crc-debug-tnsq5"
Jan 27 16:23:21 crc kubenswrapper[4697]: I0127 16:23:21.018373 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn84c\" (UniqueName: \"kubernetes.io/projected/9e857b57-231c-40ad-b92d-a127f6d6e798-kube-api-access-rn84c\") pod \"crc-debug-tnsq5\" (UID: \"9e857b57-231c-40ad-b92d-a127f6d6e798\") " pod="openshift-must-gather-fbgrj/crc-debug-tnsq5"
Jan 27 16:23:21 crc kubenswrapper[4697]: I0127 16:23:21.145662 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-tnsq5"
Jan 27 16:23:21 crc kubenswrapper[4697]: I0127 16:23:21.942142 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/crc-debug-tnsq5" event={"ID":"9e857b57-231c-40ad-b92d-a127f6d6e798","Type":"ContainerStarted","Data":"8eb042720e8f028d5e4162ca2e20a93727a62c8076840ee45d86aaa7c2c49cfb"}
Jan 27 16:23:33 crc kubenswrapper[4697]: I0127 16:23:33.568338 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227"
Jan 27 16:23:33 crc kubenswrapper[4697]: E0127 16:23:33.569109 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:23:35 crc kubenswrapper[4697]: I0127 16:23:35.075302 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/crc-debug-tnsq5"
event={"ID":"9e857b57-231c-40ad-b92d-a127f6d6e798","Type":"ContainerStarted","Data":"be4574f794532d12231280b097dea4a3ebb514eb1509564ab13feaf418864988"} Jan 27 16:23:35 crc kubenswrapper[4697]: I0127 16:23:35.094433 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fbgrj/crc-debug-tnsq5" podStartSLOduration=1.500202597 podStartE2EDuration="15.094415223s" podCreationTimestamp="2026-01-27 16:23:20 +0000 UTC" firstStartedPulling="2026-01-27 16:23:21.175661167 +0000 UTC m=+4497.348060948" lastFinishedPulling="2026-01-27 16:23:34.769873793 +0000 UTC m=+4510.942273574" observedRunningTime="2026-01-27 16:23:35.089925702 +0000 UTC m=+4511.262325503" watchObservedRunningTime="2026-01-27 16:23:35.094415223 +0000 UTC m=+4511.266815004" Jan 27 16:23:46 crc kubenswrapper[4697]: I0127 16:23:46.568240 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" Jan 27 16:23:46 crc kubenswrapper[4697]: E0127 16:23:46.568930 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:23:59 crc kubenswrapper[4697]: I0127 16:23:59.569073 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" Jan 27 16:23:59 crc kubenswrapper[4697]: E0127 16:23:59.570011 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:24:10 crc kubenswrapper[4697]: I0127 16:24:10.569614 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" Jan 27 16:24:10 crc kubenswrapper[4697]: E0127 16:24:10.570419 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:24:23 crc kubenswrapper[4697]: I0127 16:24:23.568891 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" Jan 27 16:24:23 crc kubenswrapper[4697]: E0127 16:24:23.569496 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:24:34 crc kubenswrapper[4697]: I0127 16:24:34.598831 4697 generic.go:334] "Generic (PLEG): container finished" podID="9e857b57-231c-40ad-b92d-a127f6d6e798" containerID="be4574f794532d12231280b097dea4a3ebb514eb1509564ab13feaf418864988" exitCode=0 Jan 27 16:24:34 crc kubenswrapper[4697]: I0127 16:24:34.598855 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/crc-debug-tnsq5" 
event={"ID":"9e857b57-231c-40ad-b92d-a127f6d6e798","Type":"ContainerDied","Data":"be4574f794532d12231280b097dea4a3ebb514eb1509564ab13feaf418864988"} Jan 27 16:24:35 crc kubenswrapper[4697]: I0127 16:24:35.721409 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-tnsq5" Jan 27 16:24:35 crc kubenswrapper[4697]: I0127 16:24:35.761523 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fbgrj/crc-debug-tnsq5"] Jan 27 16:24:35 crc kubenswrapper[4697]: I0127 16:24:35.774063 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fbgrj/crc-debug-tnsq5"] Jan 27 16:24:35 crc kubenswrapper[4697]: I0127 16:24:35.822882 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e857b57-231c-40ad-b92d-a127f6d6e798-host\") pod \"9e857b57-231c-40ad-b92d-a127f6d6e798\" (UID: \"9e857b57-231c-40ad-b92d-a127f6d6e798\") " Jan 27 16:24:35 crc kubenswrapper[4697]: I0127 16:24:35.822995 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e857b57-231c-40ad-b92d-a127f6d6e798-host" (OuterVolumeSpecName: "host") pod "9e857b57-231c-40ad-b92d-a127f6d6e798" (UID: "9e857b57-231c-40ad-b92d-a127f6d6e798"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:24:35 crc kubenswrapper[4697]: I0127 16:24:35.823239 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn84c\" (UniqueName: \"kubernetes.io/projected/9e857b57-231c-40ad-b92d-a127f6d6e798-kube-api-access-rn84c\") pod \"9e857b57-231c-40ad-b92d-a127f6d6e798\" (UID: \"9e857b57-231c-40ad-b92d-a127f6d6e798\") " Jan 27 16:24:35 crc kubenswrapper[4697]: I0127 16:24:35.823744 4697 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e857b57-231c-40ad-b92d-a127f6d6e798-host\") on node \"crc\" DevicePath \"\"" Jan 27 16:24:35 crc kubenswrapper[4697]: I0127 16:24:35.839178 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e857b57-231c-40ad-b92d-a127f6d6e798-kube-api-access-rn84c" (OuterVolumeSpecName: "kube-api-access-rn84c") pod "9e857b57-231c-40ad-b92d-a127f6d6e798" (UID: "9e857b57-231c-40ad-b92d-a127f6d6e798"). InnerVolumeSpecName "kube-api-access-rn84c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:24:35 crc kubenswrapper[4697]: I0127 16:24:35.925811 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn84c\" (UniqueName: \"kubernetes.io/projected/9e857b57-231c-40ad-b92d-a127f6d6e798-kube-api-access-rn84c\") on node \"crc\" DevicePath \"\"" Jan 27 16:24:36 crc kubenswrapper[4697]: I0127 16:24:36.577837 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e857b57-231c-40ad-b92d-a127f6d6e798" path="/var/lib/kubelet/pods/9e857b57-231c-40ad-b92d-a127f6d6e798/volumes" Jan 27 16:24:36 crc kubenswrapper[4697]: I0127 16:24:36.625346 4697 scope.go:117] "RemoveContainer" containerID="be4574f794532d12231280b097dea4a3ebb514eb1509564ab13feaf418864988" Jan 27 16:24:36 crc kubenswrapper[4697]: I0127 16:24:36.625594 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-tnsq5" Jan 27 16:24:36 crc kubenswrapper[4697]: I0127 16:24:36.939694 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fbgrj/crc-debug-k5m8x"] Jan 27 16:24:36 crc kubenswrapper[4697]: E0127 16:24:36.940350 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e857b57-231c-40ad-b92d-a127f6d6e798" containerName="container-00" Jan 27 16:24:36 crc kubenswrapper[4697]: I0127 16:24:36.940362 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e857b57-231c-40ad-b92d-a127f6d6e798" containerName="container-00" Jan 27 16:24:36 crc kubenswrapper[4697]: I0127 16:24:36.940528 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e857b57-231c-40ad-b92d-a127f6d6e798" containerName="container-00" Jan 27 16:24:36 crc kubenswrapper[4697]: I0127 16:24:36.941289 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:36 crc kubenswrapper[4697]: I0127 16:24:36.943838 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fbgrj"/"default-dockercfg-jkp4h" Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.063890 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq5w8\" (UniqueName: \"kubernetes.io/projected/24e77763-869c-4894-94b0-327c3a21695f-kube-api-access-cq5w8\") pod \"crc-debug-k5m8x\" (UID: \"24e77763-869c-4894-94b0-327c3a21695f\") " pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.064028 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24e77763-869c-4894-94b0-327c3a21695f-host\") pod \"crc-debug-k5m8x\" (UID: \"24e77763-869c-4894-94b0-327c3a21695f\") " 
pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.166697 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq5w8\" (UniqueName: \"kubernetes.io/projected/24e77763-869c-4894-94b0-327c3a21695f-kube-api-access-cq5w8\") pod \"crc-debug-k5m8x\" (UID: \"24e77763-869c-4894-94b0-327c3a21695f\") " pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.166819 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24e77763-869c-4894-94b0-327c3a21695f-host\") pod \"crc-debug-k5m8x\" (UID: \"24e77763-869c-4894-94b0-327c3a21695f\") " pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.166909 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24e77763-869c-4894-94b0-327c3a21695f-host\") pod \"crc-debug-k5m8x\" (UID: \"24e77763-869c-4894-94b0-327c3a21695f\") " pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.184552 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq5w8\" (UniqueName: \"kubernetes.io/projected/24e77763-869c-4894-94b0-327c3a21695f-kube-api-access-cq5w8\") pod \"crc-debug-k5m8x\" (UID: \"24e77763-869c-4894-94b0-327c3a21695f\") " pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.258526 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.635812 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" event={"ID":"24e77763-869c-4894-94b0-327c3a21695f","Type":"ContainerStarted","Data":"4b69c5a9c6435ab492e37d8ee5484de8e3cdaf132215bed187770fcede7eed93"} Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.636102 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" event={"ID":"24e77763-869c-4894-94b0-327c3a21695f","Type":"ContainerStarted","Data":"24865fe6888dc453e8ba0541bd8872791a7a79b1cfa5105a72c30d5ba145f870"} Jan 27 16:24:37 crc kubenswrapper[4697]: I0127 16:24:37.648994 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" podStartSLOduration=1.6489772089999999 podStartE2EDuration="1.648977209s" podCreationTimestamp="2026-01-27 16:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:24:37.648552569 +0000 UTC m=+4573.820952340" watchObservedRunningTime="2026-01-27 16:24:37.648977209 +0000 UTC m=+4573.821376990" Jan 27 16:24:38 crc kubenswrapper[4697]: I0127 16:24:38.572561 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" Jan 27 16:24:38 crc kubenswrapper[4697]: E0127 16:24:38.573755 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:24:38 crc 
kubenswrapper[4697]: I0127 16:24:38.647756 4697 generic.go:334] "Generic (PLEG): container finished" podID="24e77763-869c-4894-94b0-327c3a21695f" containerID="4b69c5a9c6435ab492e37d8ee5484de8e3cdaf132215bed187770fcede7eed93" exitCode=0 Jan 27 16:24:38 crc kubenswrapper[4697]: I0127 16:24:38.647811 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" event={"ID":"24e77763-869c-4894-94b0-327c3a21695f","Type":"ContainerDied","Data":"4b69c5a9c6435ab492e37d8ee5484de8e3cdaf132215bed187770fcede7eed93"} Jan 27 16:24:39 crc kubenswrapper[4697]: I0127 16:24:39.782688 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:39 crc kubenswrapper[4697]: I0127 16:24:39.819386 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fbgrj/crc-debug-k5m8x"] Jan 27 16:24:39 crc kubenswrapper[4697]: I0127 16:24:39.828217 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fbgrj/crc-debug-k5m8x"] Jan 27 16:24:39 crc kubenswrapper[4697]: I0127 16:24:39.920584 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq5w8\" (UniqueName: \"kubernetes.io/projected/24e77763-869c-4894-94b0-327c3a21695f-kube-api-access-cq5w8\") pod \"24e77763-869c-4894-94b0-327c3a21695f\" (UID: \"24e77763-869c-4894-94b0-327c3a21695f\") " Jan 27 16:24:39 crc kubenswrapper[4697]: I0127 16:24:39.920694 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24e77763-869c-4894-94b0-327c3a21695f-host\") pod \"24e77763-869c-4894-94b0-327c3a21695f\" (UID: \"24e77763-869c-4894-94b0-327c3a21695f\") " Jan 27 16:24:39 crc kubenswrapper[4697]: I0127 16:24:39.921130 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e77763-869c-4894-94b0-327c3a21695f-host" 
(OuterVolumeSpecName: "host") pod "24e77763-869c-4894-94b0-327c3a21695f" (UID: "24e77763-869c-4894-94b0-327c3a21695f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:24:39 crc kubenswrapper[4697]: I0127 16:24:39.926755 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e77763-869c-4894-94b0-327c3a21695f-kube-api-access-cq5w8" (OuterVolumeSpecName: "kube-api-access-cq5w8") pod "24e77763-869c-4894-94b0-327c3a21695f" (UID: "24e77763-869c-4894-94b0-327c3a21695f"). InnerVolumeSpecName "kube-api-access-cq5w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:24:40 crc kubenswrapper[4697]: I0127 16:24:40.022841 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq5w8\" (UniqueName: \"kubernetes.io/projected/24e77763-869c-4894-94b0-327c3a21695f-kube-api-access-cq5w8\") on node \"crc\" DevicePath \"\"" Jan 27 16:24:40 crc kubenswrapper[4697]: I0127 16:24:40.022878 4697 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24e77763-869c-4894-94b0-327c3a21695f-host\") on node \"crc\" DevicePath \"\"" Jan 27 16:24:40 crc kubenswrapper[4697]: I0127 16:24:40.579164 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e77763-869c-4894-94b0-327c3a21695f" path="/var/lib/kubelet/pods/24e77763-869c-4894-94b0-327c3a21695f/volumes" Jan 27 16:24:40 crc kubenswrapper[4697]: I0127 16:24:40.665583 4697 scope.go:117] "RemoveContainer" containerID="4b69c5a9c6435ab492e37d8ee5484de8e3cdaf132215bed187770fcede7eed93" Jan 27 16:24:40 crc kubenswrapper[4697]: I0127 16:24:40.665590 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-k5m8x" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.051395 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fbgrj/crc-debug-hfv2m"] Jan 27 16:24:41 crc kubenswrapper[4697]: E0127 16:24:41.052078 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e77763-869c-4894-94b0-327c3a21695f" containerName="container-00" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.052091 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e77763-869c-4894-94b0-327c3a21695f" containerName="container-00" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.052263 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e77763-869c-4894-94b0-327c3a21695f" containerName="container-00" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.052844 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.054834 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fbgrj"/"default-dockercfg-jkp4h" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.142953 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4658d63-45d8-4d68-bd9e-0a5919436e1c-host\") pod \"crc-debug-hfv2m\" (UID: \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\") " pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.143039 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf8vw\" (UniqueName: \"kubernetes.io/projected/a4658d63-45d8-4d68-bd9e-0a5919436e1c-kube-api-access-jf8vw\") pod \"crc-debug-hfv2m\" (UID: \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\") " 
pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.244865 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4658d63-45d8-4d68-bd9e-0a5919436e1c-host\") pod \"crc-debug-hfv2m\" (UID: \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\") " pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.244987 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf8vw\" (UniqueName: \"kubernetes.io/projected/a4658d63-45d8-4d68-bd9e-0a5919436e1c-kube-api-access-jf8vw\") pod \"crc-debug-hfv2m\" (UID: \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\") " pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.245003 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4658d63-45d8-4d68-bd9e-0a5919436e1c-host\") pod \"crc-debug-hfv2m\" (UID: \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\") " pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.261064 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf8vw\" (UniqueName: \"kubernetes.io/projected/a4658d63-45d8-4d68-bd9e-0a5919436e1c-kube-api-access-jf8vw\") pod \"crc-debug-hfv2m\" (UID: \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\") " pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.375800 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:41 crc kubenswrapper[4697]: W0127 16:24:41.413371 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4658d63_45d8_4d68_bd9e_0a5919436e1c.slice/crio-253f208a5537872499f1a8274a5b6dcf9e6fac1b08f8d8c12d2626505f3fc1f6 WatchSource:0}: Error finding container 253f208a5537872499f1a8274a5b6dcf9e6fac1b08f8d8c12d2626505f3fc1f6: Status 404 returned error can't find the container with id 253f208a5537872499f1a8274a5b6dcf9e6fac1b08f8d8c12d2626505f3fc1f6 Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.681460 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" event={"ID":"a4658d63-45d8-4d68-bd9e-0a5919436e1c","Type":"ContainerStarted","Data":"0880ba291186249e6bf6d75585162136608ea2c5720fe09e046cffa5045c900f"} Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.681854 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" event={"ID":"a4658d63-45d8-4d68-bd9e-0a5919436e1c","Type":"ContainerStarted","Data":"253f208a5537872499f1a8274a5b6dcf9e6fac1b08f8d8c12d2626505f3fc1f6"} Jan 27 16:24:41 crc kubenswrapper[4697]: I0127 16:24:41.702044 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" podStartSLOduration=0.702027019 podStartE2EDuration="702.027019ms" podCreationTimestamp="2026-01-27 16:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:24:41.69759155 +0000 UTC m=+4577.869991331" watchObservedRunningTime="2026-01-27 16:24:41.702027019 +0000 UTC m=+4577.874426800" Jan 27 16:24:42 crc kubenswrapper[4697]: I0127 16:24:42.694552 4697 generic.go:334] "Generic (PLEG): container finished" podID="a4658d63-45d8-4d68-bd9e-0a5919436e1c" 
containerID="0880ba291186249e6bf6d75585162136608ea2c5720fe09e046cffa5045c900f" exitCode=0 Jan 27 16:24:42 crc kubenswrapper[4697]: I0127 16:24:42.694604 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" event={"ID":"a4658d63-45d8-4d68-bd9e-0a5919436e1c","Type":"ContainerDied","Data":"0880ba291186249e6bf6d75585162136608ea2c5720fe09e046cffa5045c900f"} Jan 27 16:24:43 crc kubenswrapper[4697]: I0127 16:24:43.816985 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:43 crc kubenswrapper[4697]: I0127 16:24:43.855718 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fbgrj/crc-debug-hfv2m"] Jan 27 16:24:43 crc kubenswrapper[4697]: I0127 16:24:43.863895 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fbgrj/crc-debug-hfv2m"] Jan 27 16:24:43 crc kubenswrapper[4697]: I0127 16:24:43.891336 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4658d63-45d8-4d68-bd9e-0a5919436e1c-host\") pod \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\" (UID: \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\") " Jan 27 16:24:43 crc kubenswrapper[4697]: I0127 16:24:43.891621 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf8vw\" (UniqueName: \"kubernetes.io/projected/a4658d63-45d8-4d68-bd9e-0a5919436e1c-kube-api-access-jf8vw\") pod \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\" (UID: \"a4658d63-45d8-4d68-bd9e-0a5919436e1c\") " Jan 27 16:24:43 crc kubenswrapper[4697]: I0127 16:24:43.891751 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4658d63-45d8-4d68-bd9e-0a5919436e1c-host" (OuterVolumeSpecName: "host") pod "a4658d63-45d8-4d68-bd9e-0a5919436e1c" (UID: "a4658d63-45d8-4d68-bd9e-0a5919436e1c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:24:43 crc kubenswrapper[4697]: I0127 16:24:43.892246 4697 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4658d63-45d8-4d68-bd9e-0a5919436e1c-host\") on node \"crc\" DevicePath \"\"" Jan 27 16:24:43 crc kubenswrapper[4697]: I0127 16:24:43.896976 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4658d63-45d8-4d68-bd9e-0a5919436e1c-kube-api-access-jf8vw" (OuterVolumeSpecName: "kube-api-access-jf8vw") pod "a4658d63-45d8-4d68-bd9e-0a5919436e1c" (UID: "a4658d63-45d8-4d68-bd9e-0a5919436e1c"). InnerVolumeSpecName "kube-api-access-jf8vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:24:43 crc kubenswrapper[4697]: I0127 16:24:43.994073 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf8vw\" (UniqueName: \"kubernetes.io/projected/a4658d63-45d8-4d68-bd9e-0a5919436e1c-kube-api-access-jf8vw\") on node \"crc\" DevicePath \"\"" Jan 27 16:24:44 crc kubenswrapper[4697]: I0127 16:24:44.609190 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4658d63-45d8-4d68-bd9e-0a5919436e1c" path="/var/lib/kubelet/pods/a4658d63-45d8-4d68-bd9e-0a5919436e1c/volumes" Jan 27 16:24:44 crc kubenswrapper[4697]: I0127 16:24:44.711412 4697 scope.go:117] "RemoveContainer" containerID="0880ba291186249e6bf6d75585162136608ea2c5720fe09e046cffa5045c900f" Jan 27 16:24:44 crc kubenswrapper[4697]: I0127 16:24:44.711571 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fbgrj/crc-debug-hfv2m" Jan 27 16:24:50 crc kubenswrapper[4697]: I0127 16:24:50.568485 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" Jan 27 16:24:50 crc kubenswrapper[4697]: E0127 16:24:50.569162 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:25:05 crc kubenswrapper[4697]: I0127 16:25:05.569258 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" Jan 27 16:25:05 crc kubenswrapper[4697]: I0127 16:25:05.902771 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"33ef3249056f5753476b3fb4dab581920d9492912c671892b553ffa873cec697"} Jan 27 16:25:05 crc kubenswrapper[4697]: I0127 16:25:05.961131 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cbf684fd-9bzgt_94a26d25-9d4f-4d9e-becb-5fef1852a9cc/barbican-api/0.log" Jan 27 16:25:06 crc kubenswrapper[4697]: I0127 16:25:06.139336 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cbf684fd-9bzgt_94a26d25-9d4f-4d9e-becb-5fef1852a9cc/barbican-api-log/0.log" Jan 27 16:25:06 crc kubenswrapper[4697]: I0127 16:25:06.219609 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b5997ff6-w2vq4_4ab6ee6b-e923-4905-8d8d-56f96e3bd471/barbican-keystone-listener/0.log" Jan 27 16:25:06 crc 
kubenswrapper[4697]: I0127 16:25:06.393221 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b5997ff6-w2vq4_4ab6ee6b-e923-4905-8d8d-56f96e3bd471/barbican-keystone-listener-log/0.log" Jan 27 16:25:06 crc kubenswrapper[4697]: I0127 16:25:06.956436 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8c986997f-97nkx_c283033b-665a-4e84-b347-5ab724df37be/barbican-worker/0.log" Jan 27 16:25:07 crc kubenswrapper[4697]: I0127 16:25:07.022624 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8c986997f-97nkx_c283033b-665a-4e84-b347-5ab724df37be/barbican-worker-log/0.log" Jan 27 16:25:07 crc kubenswrapper[4697]: I0127 16:25:07.385244 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2_e6db178e-d462-4895-84e2-10695b0df557/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:07 crc kubenswrapper[4697]: I0127 16:25:07.407104 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43946a66-4e74-47e4-bfd3-63256993e153/ceilometer-central-agent/0.log" Jan 27 16:25:07 crc kubenswrapper[4697]: I0127 16:25:07.464638 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43946a66-4e74-47e4-bfd3-63256993e153/ceilometer-notification-agent/0.log" Jan 27 16:25:07 crc kubenswrapper[4697]: I0127 16:25:07.674716 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43946a66-4e74-47e4-bfd3-63256993e153/proxy-httpd/0.log" Jan 27 16:25:07 crc kubenswrapper[4697]: I0127 16:25:07.742980 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43946a66-4e74-47e4-bfd3-63256993e153/sg-core/0.log" Jan 27 16:25:07 crc kubenswrapper[4697]: I0127 16:25:07.862153 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_6c37afd4-a5ce-450f-8d51-231aba899e23/cinder-api/0.log" Jan 27 16:25:07 crc kubenswrapper[4697]: I0127 16:25:07.935019 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6c37afd4-a5ce-450f-8d51-231aba899e23/cinder-api-log/0.log" Jan 27 16:25:08 crc kubenswrapper[4697]: I0127 16:25:08.576556 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b69b7a05-c4c4-48a4-a4fa-0cc140a18080/probe/0.log" Jan 27 16:25:08 crc kubenswrapper[4697]: I0127 16:25:08.618492 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b69b7a05-c4c4-48a4-a4fa-0cc140a18080/cinder-scheduler/0.log" Jan 27 16:25:09 crc kubenswrapper[4697]: I0127 16:25:09.051605 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r_ed55f221-f5eb-421e-88b3-682ff73202dc/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:09 crc kubenswrapper[4697]: I0127 16:25:09.246584 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-666b2_e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:09 crc kubenswrapper[4697]: I0127 16:25:09.347803 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-jdz29_56c582a3-145c-4300-8680-1720a7581f60/init/0.log" Jan 27 16:25:09 crc kubenswrapper[4697]: I0127 16:25:09.611362 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-jdz29_56c582a3-145c-4300-8680-1720a7581f60/init/0.log" Jan 27 16:25:09 crc kubenswrapper[4697]: I0127 16:25:09.635480 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-67tjj_0b244a0a-7ccb-49be-bcef-497d3b0f99be/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:09 crc kubenswrapper[4697]: I0127 16:25:09.849333 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-jdz29_56c582a3-145c-4300-8680-1720a7581f60/dnsmasq-dns/0.log" Jan 27 16:25:09 crc kubenswrapper[4697]: I0127 16:25:09.996854 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8397e2ec-7b94-4690-b567-716eae78b6d0/glance-httpd/0.log" Jan 27 16:25:10 crc kubenswrapper[4697]: I0127 16:25:10.144637 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8397e2ec-7b94-4690-b567-716eae78b6d0/glance-log/0.log" Jan 27 16:25:10 crc kubenswrapper[4697]: I0127 16:25:10.187447 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_31e0b520-a0d8-4d9c-a53a-dbc75c401f4f/glance-httpd/0.log" Jan 27 16:25:10 crc kubenswrapper[4697]: I0127 16:25:10.246680 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_31e0b520-a0d8-4d9c-a53a-dbc75c401f4f/glance-log/0.log" Jan 27 16:25:10 crc kubenswrapper[4697]: I0127 16:25:10.583926 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9dc56b78-cpxnx_ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4/horizon/1.log" Jan 27 16:25:10 crc kubenswrapper[4697]: I0127 16:25:10.586524 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9dc56b78-cpxnx_ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4/horizon/2.log" Jan 27 16:25:10 crc kubenswrapper[4697]: I0127 16:25:10.912768 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9dc56b78-cpxnx_ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4/horizon-log/0.log" Jan 27 16:25:11 crc kubenswrapper[4697]: I0127 
16:25:11.034310 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-j646z_59c9a20e-f30b-44c1-86ff-fc751969cb24/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:11 crc kubenswrapper[4697]: I0127 16:25:11.126448 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vfmvd_91be9d7e-7513-4b5f-a897-9bb94f9d7649/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:11 crc kubenswrapper[4697]: I0127 16:25:11.577261 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492161-pqk7t_02b729a6-604c-42d7-94d9-0d39bfcaf203/keystone-cron/0.log" Jan 27 16:25:11 crc kubenswrapper[4697]: I0127 16:25:11.719878 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-59df5b454d-5c7dx_d505d1b9-c72c-4515-8f3f-f543d0276487/keystone-api/0.log" Jan 27 16:25:11 crc kubenswrapper[4697]: I0127 16:25:11.773722 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_48bb37d7-5e93-4523-8526-b8b664997fb3/kube-state-metrics/0.log" Jan 27 16:25:11 crc kubenswrapper[4697]: I0127 16:25:11.988677 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4cl49_cb80b572-758d-4bd1-b54a-eb5b40cce9db/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:12 crc kubenswrapper[4697]: I0127 16:25:12.580055 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf_38a907af-3d24-434c-a097-3b3635db95d3/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:12 crc kubenswrapper[4697]: I0127 16:25:12.805742 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dff8b9f65-4b4q2_f67513b8-77d5-4a24-b1ee-ce73e70cb72d/neutron-httpd/0.log" Jan 27 16:25:12 crc 
kubenswrapper[4697]: I0127 16:25:12.987692 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dff8b9f65-4b4q2_f67513b8-77d5-4a24-b1ee-ce73e70cb72d/neutron-api/0.log" Jan 27 16:25:13 crc kubenswrapper[4697]: I0127 16:25:13.763690 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_42230c5d-4496-4618-bd71-9b11d49bde9b/nova-cell0-conductor-conductor/0.log" Jan 27 16:25:13 crc kubenswrapper[4697]: I0127 16:25:13.971714 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_39bb5256-76f5-4ada-8803-c88ee4ccd881/memcached/0.log" Jan 27 16:25:14 crc kubenswrapper[4697]: I0127 16:25:14.026677 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_41233cf8-f273-4cae-a02d-9e0fb56b2f1d/nova-cell1-conductor-conductor/0.log" Jan 27 16:25:14 crc kubenswrapper[4697]: I0127 16:25:14.306884 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f7a25f76-cbe2-44a4-911d-40b875d2f934/nova-api-log/0.log" Jan 27 16:25:14 crc kubenswrapper[4697]: I0127 16:25:14.462759 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6ac3c287-657e-4e2a-be91-50e9fbce6ea0/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 16:25:14 crc kubenswrapper[4697]: I0127 16:25:14.591521 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jk5zz_8b858060-b802-452d-aa2a-1be4f38efe74/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:14 crc kubenswrapper[4697]: I0127 16:25:14.596551 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f7a25f76-cbe2-44a4-911d-40b875d2f934/nova-api-api/0.log" Jan 27 16:25:14 crc kubenswrapper[4697]: I0127 16:25:14.676978 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_3c1b720b-0b31-4c5d-9306-ca65e780dc12/nova-metadata-log/0.log" Jan 27 16:25:14 crc kubenswrapper[4697]: I0127 16:25:14.957855 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a07684ab-be65-430a-89ff-7e3503304f07/mysql-bootstrap/0.log" Jan 27 16:25:15 crc kubenswrapper[4697]: I0127 16:25:15.290839 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a07684ab-be65-430a-89ff-7e3503304f07/galera/0.log" Jan 27 16:25:15 crc kubenswrapper[4697]: I0127 16:25:15.343871 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0aac0bcf-d6ae-4188-b597-e42935d81d0e/nova-scheduler-scheduler/0.log" Jan 27 16:25:15 crc kubenswrapper[4697]: I0127 16:25:15.368225 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a07684ab-be65-430a-89ff-7e3503304f07/mysql-bootstrap/0.log" Jan 27 16:25:15 crc kubenswrapper[4697]: I0127 16:25:15.614033 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b1f75076-2324-44ff-9a33-e083e3de3c02/mysql-bootstrap/0.log" Jan 27 16:25:15 crc kubenswrapper[4697]: I0127 16:25:15.851206 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b1f75076-2324-44ff-9a33-e083e3de3c02/mysql-bootstrap/0.log" Jan 27 16:25:15 crc kubenswrapper[4697]: I0127 16:25:15.923627 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3c1b720b-0b31-4c5d-9306-ca65e780dc12/nova-metadata-metadata/0.log" Jan 27 16:25:15 crc kubenswrapper[4697]: I0127 16:25:15.967394 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b1f75076-2324-44ff-9a33-e083e3de3c02/galera/0.log" Jan 27 16:25:16 crc kubenswrapper[4697]: I0127 16:25:16.002516 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_3d176f9c-9152-4162-b723-1f6e8330118a/openstackclient/0.log" Jan 27 16:25:16 crc kubenswrapper[4697]: I0127 16:25:16.208489 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6sgqx_72f31a1f-c388-4fed-9842-13f65cf91e9b/ovn-controller/0.log" Jan 27 16:25:16 crc kubenswrapper[4697]: I0127 16:25:16.227548 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mfkdn_e7853fb5-d995-44ae-b1b5-c4c38fcadbd2/openstack-network-exporter/0.log" Jan 27 16:25:16 crc kubenswrapper[4697]: I0127 16:25:16.382530 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p278q_9cb679cc-394c-4c45-8712-058fad1090e7/ovsdb-server-init/0.log" Jan 27 16:25:16 crc kubenswrapper[4697]: I0127 16:25:16.999951 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p278q_9cb679cc-394c-4c45-8712-058fad1090e7/ovsdb-server-init/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.049711 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p278q_9cb679cc-394c-4c45-8712-058fad1090e7/ovs-vswitchd/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.064462 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p278q_9cb679cc-394c-4c45-8712-058fad1090e7/ovsdb-server/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.079700 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ljqck_679f5e04-5c46-49e5-9216-f850ca38d84d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.313264 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_93a06b1b-be54-4517-a12a-83c9a4f91367/openstack-network-exporter/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.340006 
4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0e5111c0-346b-4994-822c-c86f4ee166bc/openstack-network-exporter/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.369073 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_93a06b1b-be54-4517-a12a-83c9a4f91367/ovn-northd/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.577103 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0e5111c0-346b-4994-822c-c86f4ee166bc/ovsdbserver-nb/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.685833 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0c5d2358-058b-4d32-86b7-20228aff9677/openstack-network-exporter/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.703964 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0c5d2358-058b-4d32-86b7-20228aff9677/ovsdbserver-sb/0.log" Jan 27 16:25:17 crc kubenswrapper[4697]: I0127 16:25:17.998879 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e1aa709a-61ff-458d-a4b9-ca6d06bc537c/setup-container/0.log" Jan 27 16:25:18 crc kubenswrapper[4697]: I0127 16:25:18.009902 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85b9bd5db8-9x55q_5663a40f-33b6-4e0b-9f94-94aecd69e3af/placement-api/0.log" Jan 27 16:25:18 crc kubenswrapper[4697]: I0127 16:25:18.163491 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85b9bd5db8-9x55q_5663a40f-33b6-4e0b-9f94-94aecd69e3af/placement-log/0.log" Jan 27 16:25:18 crc kubenswrapper[4697]: I0127 16:25:18.583696 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e1aa709a-61ff-458d-a4b9-ca6d06bc537c/setup-container/0.log" Jan 27 16:25:18 crc kubenswrapper[4697]: I0127 16:25:18.630032 4697 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e1aa709a-61ff-458d-a4b9-ca6d06bc537c/rabbitmq/0.log" Jan 27 16:25:18 crc kubenswrapper[4697]: I0127 16:25:18.637489 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b9b87d14-1e98-448a-9b9c-3c47e4782ede/setup-container/0.log" Jan 27 16:25:18 crc kubenswrapper[4697]: I0127 16:25:18.822815 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b9b87d14-1e98-448a-9b9c-3c47e4782ede/setup-container/0.log" Jan 27 16:25:18 crc kubenswrapper[4697]: I0127 16:25:18.894287 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b9b87d14-1e98-448a-9b9c-3c47e4782ede/rabbitmq/0.log" Jan 27 16:25:18 crc kubenswrapper[4697]: I0127 16:25:18.918585 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9_59725918-a0f4-46fb-afcf-393ee1d4d22b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.048529 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-bhvqs_e7fe5183-36d1-4594-859b-b999146707ad/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.163925 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt_6eb281af-668c-4872-8100-3a9db4eb4c5a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.323657 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4tcrw_f15a6662-a671-40da-9473-59daaedbe07c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.442617 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-nkk45_707d2908-c632-4cb5-9a3f-8d44f79aedcb/ssh-known-hosts-edpm-deployment/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.554305 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8bdd65479-mrv2d_8f6bc9e4-3f3f-4e33-a648-4381818937f1/proxy-httpd/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.629021 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-w2t78_afa66008-cd63-46fa-8ac6-622e2b465eec/swift-ring-rebalance/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.752051 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8bdd65479-mrv2d_8f6bc9e4-3f3f-4e33-a648-4381818937f1/proxy-server/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.820547 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/account-auditor/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.921970 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/account-reaper/0.log" Jan 27 16:25:19 crc kubenswrapper[4697]: I0127 16:25:19.962044 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/account-replicator/0.log" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.048949 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/account-server/0.log" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.154910 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b88ns"] Jan 27 16:25:20 crc kubenswrapper[4697]: E0127 16:25:20.155288 4697 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a4658d63-45d8-4d68-bd9e-0a5919436e1c" containerName="container-00" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.155303 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4658d63-45d8-4d68-bd9e-0a5919436e1c" containerName="container-00" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.155475 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4658d63-45d8-4d68-bd9e-0a5919436e1c" containerName="container-00" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.156709 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.185295 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/container-replicator/0.log" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.187295 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/container-auditor/0.log" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.196969 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b88ns"] Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.227219 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jxjw\" (UniqueName: \"kubernetes.io/projected/58e74c48-877d-4675-afcf-ddc1a98e0daf-kube-api-access-2jxjw\") pod \"certified-operators-b88ns\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.227273 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-utilities\") pod \"certified-operators-b88ns\" (UID: 
\"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.227375 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-catalog-content\") pod \"certified-operators-b88ns\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.329480 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jxjw\" (UniqueName: \"kubernetes.io/projected/58e74c48-877d-4675-afcf-ddc1a98e0daf-kube-api-access-2jxjw\") pod \"certified-operators-b88ns\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.329540 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-utilities\") pod \"certified-operators-b88ns\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.329647 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-catalog-content\") pod \"certified-operators-b88ns\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.330354 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-catalog-content\") pod \"certified-operators-b88ns\" (UID: 
\"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.330391 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-utilities\") pod \"certified-operators-b88ns\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.368491 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jxjw\" (UniqueName: \"kubernetes.io/projected/58e74c48-877d-4675-afcf-ddc1a98e0daf-kube-api-access-2jxjw\") pod \"certified-operators-b88ns\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.476075 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.573537 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/container-updater/0.log" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.582990 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/container-server/0.log" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.813236 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-auditor/0.log" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.863803 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-replicator/0.log" Jan 27 16:25:20 crc kubenswrapper[4697]: I0127 16:25:20.978683 4697 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b88ns"] Jan 27 16:25:21 crc kubenswrapper[4697]: I0127 16:25:21.052299 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b88ns" event={"ID":"58e74c48-877d-4675-afcf-ddc1a98e0daf","Type":"ContainerStarted","Data":"b737e992f4f09dfc30582ac3c123a81de64116c82e64300813aca3fbbb230f2f"} Jan 27 16:25:21 crc kubenswrapper[4697]: I0127 16:25:21.075244 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-server/0.log" Jan 27 16:25:21 crc kubenswrapper[4697]: I0127 16:25:21.075340 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-updater/0.log" Jan 27 16:25:21 crc kubenswrapper[4697]: I0127 16:25:21.143287 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-expirer/0.log" Jan 27 16:25:21 crc kubenswrapper[4697]: I0127 16:25:21.475920 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/rsync/0.log" Jan 27 16:25:21 crc kubenswrapper[4697]: I0127 16:25:21.580040 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/swift-recon-cron/0.log" Jan 27 16:25:21 crc kubenswrapper[4697]: I0127 16:25:21.733967 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx_2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:21 crc kubenswrapper[4697]: I0127 16:25:21.824600 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_76805ce8-13c7-4d04-83c6-b70eaf33b9d8/tempest-tests-tempest-tests-runner/0.log" 
Jan 27 16:25:22 crc kubenswrapper[4697]: I0127 16:25:22.000267 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_adb48667-7dff-4826-858e-5825e64dfd59/test-operator-logs-container/0.log" Jan 27 16:25:22 crc kubenswrapper[4697]: I0127 16:25:22.043775 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-h6skx_dba3e49a-c1cb-4006-b821-a341645c7fba/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:25:22 crc kubenswrapper[4697]: I0127 16:25:22.062677 4697 generic.go:334] "Generic (PLEG): container finished" podID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerID="6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd" exitCode=0 Jan 27 16:25:22 crc kubenswrapper[4697]: I0127 16:25:22.062720 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b88ns" event={"ID":"58e74c48-877d-4675-afcf-ddc1a98e0daf","Type":"ContainerDied","Data":"6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd"} Jan 27 16:25:24 crc kubenswrapper[4697]: I0127 16:25:24.080099 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b88ns" event={"ID":"58e74c48-877d-4675-afcf-ddc1a98e0daf","Type":"ContainerStarted","Data":"d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83"} Jan 27 16:25:27 crc kubenswrapper[4697]: I0127 16:25:27.119935 4697 generic.go:334] "Generic (PLEG): container finished" podID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerID="d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83" exitCode=0 Jan 27 16:25:27 crc kubenswrapper[4697]: I0127 16:25:27.120017 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b88ns" 
event={"ID":"58e74c48-877d-4675-afcf-ddc1a98e0daf","Type":"ContainerDied","Data":"d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83"} Jan 27 16:25:28 crc kubenswrapper[4697]: I0127 16:25:28.130977 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b88ns" event={"ID":"58e74c48-877d-4675-afcf-ddc1a98e0daf","Type":"ContainerStarted","Data":"d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829"} Jan 27 16:25:28 crc kubenswrapper[4697]: I0127 16:25:28.162264 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b88ns" podStartSLOduration=2.677324946 podStartE2EDuration="8.162245098s" podCreationTimestamp="2026-01-27 16:25:20 +0000 UTC" firstStartedPulling="2026-01-27 16:25:22.06485166 +0000 UTC m=+4618.237251441" lastFinishedPulling="2026-01-27 16:25:27.549771812 +0000 UTC m=+4623.722171593" observedRunningTime="2026-01-27 16:25:28.153108496 +0000 UTC m=+4624.325508277" watchObservedRunningTime="2026-01-27 16:25:28.162245098 +0000 UTC m=+4624.334644879" Jan 27 16:25:30 crc kubenswrapper[4697]: I0127 16:25:30.476907 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:30 crc kubenswrapper[4697]: I0127 16:25:30.478183 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:30 crc kubenswrapper[4697]: I0127 16:25:30.530977 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:32 crc kubenswrapper[4697]: I0127 16:25:32.218626 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:32 crc kubenswrapper[4697]: I0127 16:25:32.263500 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-b88ns"] Jan 27 16:25:34 crc kubenswrapper[4697]: I0127 16:25:34.176236 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b88ns" podUID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerName="registry-server" containerID="cri-o://d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829" gracePeriod=2 Jan 27 16:25:35 crc kubenswrapper[4697]: I0127 16:25:35.958011 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.137858 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jxjw\" (UniqueName: \"kubernetes.io/projected/58e74c48-877d-4675-afcf-ddc1a98e0daf-kube-api-access-2jxjw\") pod \"58e74c48-877d-4675-afcf-ddc1a98e0daf\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.138010 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-utilities\") pod \"58e74c48-877d-4675-afcf-ddc1a98e0daf\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.138107 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-catalog-content\") pod \"58e74c48-877d-4675-afcf-ddc1a98e0daf\" (UID: \"58e74c48-877d-4675-afcf-ddc1a98e0daf\") " Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.138864 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-utilities" (OuterVolumeSpecName: "utilities") pod "58e74c48-877d-4675-afcf-ddc1a98e0daf" (UID: 
"58e74c48-877d-4675-afcf-ddc1a98e0daf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.147119 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e74c48-877d-4675-afcf-ddc1a98e0daf-kube-api-access-2jxjw" (OuterVolumeSpecName: "kube-api-access-2jxjw") pod "58e74c48-877d-4675-afcf-ddc1a98e0daf" (UID: "58e74c48-877d-4675-afcf-ddc1a98e0daf"). InnerVolumeSpecName "kube-api-access-2jxjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.199419 4697 generic.go:334] "Generic (PLEG): container finished" podID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerID="d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829" exitCode=0 Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.199455 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b88ns" event={"ID":"58e74c48-877d-4675-afcf-ddc1a98e0daf","Type":"ContainerDied","Data":"d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829"} Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.199480 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b88ns" event={"ID":"58e74c48-877d-4675-afcf-ddc1a98e0daf","Type":"ContainerDied","Data":"b737e992f4f09dfc30582ac3c123a81de64116c82e64300813aca3fbbb230f2f"} Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.199499 4697 scope.go:117] "RemoveContainer" containerID="d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.199636 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b88ns" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.205844 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58e74c48-877d-4675-afcf-ddc1a98e0daf" (UID: "58e74c48-877d-4675-afcf-ddc1a98e0daf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.220967 4697 scope.go:117] "RemoveContainer" containerID="d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.243999 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jxjw\" (UniqueName: \"kubernetes.io/projected/58e74c48-877d-4675-afcf-ddc1a98e0daf-kube-api-access-2jxjw\") on node \"crc\" DevicePath \"\"" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.244038 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.244066 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e74c48-877d-4675-afcf-ddc1a98e0daf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.246733 4697 scope.go:117] "RemoveContainer" containerID="6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.287808 4697 scope.go:117] "RemoveContainer" containerID="d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829" Jan 27 16:25:36 crc kubenswrapper[4697]: E0127 16:25:36.288766 4697 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829\": container with ID starting with d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829 not found: ID does not exist" containerID="d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.288840 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829"} err="failed to get container status \"d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829\": rpc error: code = NotFound desc = could not find container \"d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829\": container with ID starting with d266eeb26b16d4beb20456a1a29c67a57108c698ad5eb0780831db8025824829 not found: ID does not exist" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.288868 4697 scope.go:117] "RemoveContainer" containerID="d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83" Jan 27 16:25:36 crc kubenswrapper[4697]: E0127 16:25:36.294232 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83\": container with ID starting with d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83 not found: ID does not exist" containerID="d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.294283 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83"} err="failed to get container status \"d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83\": rpc error: code = NotFound desc = could not find container 
\"d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83\": container with ID starting with d593e1e8f187f11b9845991af08d8a40ea584d530d131246975df0dae7f63b83 not found: ID does not exist" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.294322 4697 scope.go:117] "RemoveContainer" containerID="6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd" Jan 27 16:25:36 crc kubenswrapper[4697]: E0127 16:25:36.294745 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd\": container with ID starting with 6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd not found: ID does not exist" containerID="6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.294777 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd"} err="failed to get container status \"6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd\": rpc error: code = NotFound desc = could not find container \"6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd\": container with ID starting with 6646f7933da063f4e10920b9eb5af8486c4c5d985ad63695f50b6b65921b78cd not found: ID does not exist" Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.538528 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b88ns"] Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.548619 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b88ns"] Jan 27 16:25:36 crc kubenswrapper[4697]: I0127 16:25:36.578963 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e74c48-877d-4675-afcf-ddc1a98e0daf" 
path="/var/lib/kubelet/pods/58e74c48-877d-4675-afcf-ddc1a98e0daf/volumes" Jan 27 16:25:52 crc kubenswrapper[4697]: I0127 16:25:52.393463 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/util/0.log" Jan 27 16:25:52 crc kubenswrapper[4697]: I0127 16:25:52.545223 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/util/0.log" Jan 27 16:25:52 crc kubenswrapper[4697]: I0127 16:25:52.622302 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/pull/0.log" Jan 27 16:25:52 crc kubenswrapper[4697]: I0127 16:25:52.688895 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/pull/0.log" Jan 27 16:25:52 crc kubenswrapper[4697]: I0127 16:25:52.878463 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/extract/0.log" Jan 27 16:25:52 crc kubenswrapper[4697]: I0127 16:25:52.881943 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/pull/0.log" Jan 27 16:25:52 crc kubenswrapper[4697]: I0127 16:25:52.890990 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/util/0.log" Jan 27 16:25:53 crc kubenswrapper[4697]: I0127 16:25:53.150517 4697 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-666rh_349690fb-f1d2-4848-8424-01e794dc6317/manager/0.log" Jan 27 16:25:53 crc kubenswrapper[4697]: I0127 16:25:53.231756 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-wppqr_6be24454-9d04-4e38-a00e-d6f62e156bd0/manager/0.log" Jan 27 16:25:53 crc kubenswrapper[4697]: I0127 16:25:53.428109 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-s4rdx_d930a939-ecb8-4955-88bf-274d35ed9e6a/manager/0.log" Jan 27 16:25:53 crc kubenswrapper[4697]: I0127 16:25:53.616016 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-hv8n2_71562cb6-5243-4433-bd90-07c45cf11203/manager/0.log" Jan 27 16:25:53 crc kubenswrapper[4697]: I0127 16:25:53.687201 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-5h569_ab1c79ce-8e28-4565-9760-5fd20ddf47eb/manager/0.log" Jan 27 16:25:53 crc kubenswrapper[4697]: I0127 16:25:53.800158 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-9nsp6_88db0cc4-3d70-47be-83e1-e5d2d3f3ff24/manager/0.log" Jan 27 16:25:54 crc kubenswrapper[4697]: I0127 16:25:54.121938 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-9qhk4_ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf/manager/0.log" Jan 27 16:25:54 crc kubenswrapper[4697]: I0127 16:25:54.200403 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-2zk5c_d26a6673-d71e-4f0a-a8f6-e87866dafa6a/manager/0.log" Jan 27 16:25:54 crc kubenswrapper[4697]: I0127 16:25:54.429705 4697 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-zppcc_39770161-132e-4037-aec7-9db6d10d17d8/manager/0.log" Jan 27 16:25:54 crc kubenswrapper[4697]: I0127 16:25:54.469654 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-5frlr_a068f004-7f2c-4c3d-8bfe-98fbc4b65a73/manager/0.log" Jan 27 16:25:54 crc kubenswrapper[4697]: I0127 16:25:54.678955 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs_b23f7e1b-6141-4dc3-bf18-70732ae7889a/manager/0.log" Jan 27 16:25:54 crc kubenswrapper[4697]: I0127 16:25:54.744885 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-7w8b9_42edadff-8683-4551-b634-33e4ad590fb1/manager/0.log" Jan 27 16:25:55 crc kubenswrapper[4697]: I0127 16:25:55.368989 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-kvp8m_66cf11a2-77ca-44a8-ade8-610d02430a2d/manager/0.log" Jan 27 16:25:55 crc kubenswrapper[4697]: I0127 16:25:55.378234 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-nx7cr_c3d1f921-6d2e-4c30-9f75-14f206a1fb7e/manager/0.log" Jan 27 16:25:55 crc kubenswrapper[4697]: I0127 16:25:55.636599 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx_ee7cb913-d3ef-459b-bd70-d6a2aea9ace3/manager/0.log" Jan 27 16:25:55 crc kubenswrapper[4697]: I0127 16:25:55.804248 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6fb647f7d4-299rw_de629115-105c-4dac-b1d9-ce37c3cf02b2/operator/0.log" Jan 27 16:25:56 crc kubenswrapper[4697]: I0127 16:25:56.174405 
4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mvqgc_779425e2-ee9e-45ea-b8c9-07df5c5278b2/registry-server/0.log" Jan 27 16:25:56 crc kubenswrapper[4697]: I0127 16:25:56.848265 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-44hkp_a484e650-0a10-44e5-8b88-0f4157293d48/manager/0.log" Jan 27 16:25:56 crc kubenswrapper[4697]: I0127 16:25:56.868406 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-ql7xq_cb062e69-364e-4798-9a7e-4cfb1b1ca571/manager/0.log" Jan 27 16:25:57 crc kubenswrapper[4697]: I0127 16:25:57.190747 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4wpgd_c74a171d-554d-4e80-ae59-cc340cad54be/operator/0.log" Jan 27 16:25:57 crc kubenswrapper[4697]: I0127 16:25:57.365862 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-ff554fc88-js46k_a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede/manager/0.log" Jan 27 16:25:57 crc kubenswrapper[4697]: I0127 16:25:57.464308 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-6hdkv_eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec/manager/0.log" Jan 27 16:25:57 crc kubenswrapper[4697]: I0127 16:25:57.622696 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-bzmfz_386961d6-c4f3-48c7-a03f-768c470daee4/manager/0.log" Jan 27 16:25:57 crc kubenswrapper[4697]: I0127 16:25:57.736218 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-bkw8p_081ab885-5c5c-41c5-a1ca-69ab3e0b5b45/manager/0.log" Jan 27 16:25:57 crc kubenswrapper[4697]: I0127 16:25:57.913122 
4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c9bb4b66c-xktdh_89a02bfb-edab-48f6-8c52-6d5f56541057/manager/0.log" Jan 27 16:26:21 crc kubenswrapper[4697]: I0127 16:26:21.776113 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4tnq9_49dd977b-6315-4446-8804-242e7e94a375/control-plane-machine-set-operator/0.log" Jan 27 16:26:22 crc kubenswrapper[4697]: I0127 16:26:22.025625 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p5q5_cbd9208d-08ed-47af-a7cf-b9ee3973b964/kube-rbac-proxy/0.log" Jan 27 16:26:22 crc kubenswrapper[4697]: I0127 16:26:22.086235 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p5q5_cbd9208d-08ed-47af-a7cf-b9ee3973b964/machine-api-operator/0.log" Jan 27 16:26:36 crc kubenswrapper[4697]: I0127 16:26:36.570451 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-z8rp7_276bf9e3-2608-4096-bc3a-fff69d9dfc64/cert-manager-controller/0.log" Jan 27 16:26:36 crc kubenswrapper[4697]: I0127 16:26:36.896311 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8n4gj_a29b72d6-fcd5-4a5a-b779-437cfc4c8365/cert-manager-webhook/0.log" Jan 27 16:26:36 crc kubenswrapper[4697]: I0127 16:26:36.965545 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-2vqhk_4cf2332b-1a6a-460c-a3a8-d7110b0960a2/cert-manager-cainjector/0.log" Jan 27 16:26:50 crc kubenswrapper[4697]: I0127 16:26:50.603311 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7brtn_a10fdbc2-63e2-4b0b-afee-5ce01520801e/nmstate-console-plugin/0.log" Jan 27 16:26:50 crc kubenswrapper[4697]: I0127 
16:26:50.742559 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-nlwcd_0cb7d58a-50bd-4ae2-9e83-5c689667726d/nmstate-handler/0.log" Jan 27 16:26:50 crc kubenswrapper[4697]: I0127 16:26:50.847881 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hkj82_c1ae0702-73b7-45df-88fb-4e93ab7f6496/kube-rbac-proxy/0.log" Jan 27 16:26:50 crc kubenswrapper[4697]: I0127 16:26:50.881959 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hkj82_c1ae0702-73b7-45df-88fb-4e93ab7f6496/nmstate-metrics/0.log" Jan 27 16:26:51 crc kubenswrapper[4697]: I0127 16:26:51.023889 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9ztbp_a886f00e-2d21-4e80-81d0-06650c1e178f/nmstate-operator/0.log" Jan 27 16:26:51 crc kubenswrapper[4697]: I0127 16:26:51.130871 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-vt4ml_0133bab9-91e4-4ff6-8dc1-cf282e197dd0/nmstate-webhook/0.log" Jan 27 16:27:19 crc kubenswrapper[4697]: I0127 16:27:19.933183 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-shgkw_0ecbc291-e00b-42be-b1dc-fd53bcb5256a/kube-rbac-proxy/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.087426 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-shgkw_0ecbc291-e00b-42be-b1dc-fd53bcb5256a/controller/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.371929 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-frr-files/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.513982 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-reloader/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.518124 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-frr-files/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.598448 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-reloader/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.616097 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-metrics/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.766484 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-reloader/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.768254 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-frr-files/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.795052 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-metrics/0.log" Jan 27 16:27:20 crc kubenswrapper[4697]: I0127 16:27:20.835109 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-metrics/0.log" Jan 27 16:27:21 crc kubenswrapper[4697]: I0127 16:27:21.024014 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-reloader/0.log" Jan 27 16:27:21 crc kubenswrapper[4697]: I0127 16:27:21.032050 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-frr-files/0.log" Jan 27 16:27:21 crc kubenswrapper[4697]: I0127 16:27:21.039653 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-metrics/0.log" Jan 27 16:27:21 crc kubenswrapper[4697]: I0127 16:27:21.087501 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/controller/0.log" Jan 27 16:27:21 crc kubenswrapper[4697]: I0127 16:27:21.265321 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/kube-rbac-proxy/0.log" Jan 27 16:27:21 crc kubenswrapper[4697]: I0127 16:27:21.283181 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/frr-metrics/0.log" Jan 27 16:27:21 crc kubenswrapper[4697]: I0127 16:27:21.377895 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/kube-rbac-proxy-frr/0.log" Jan 27 16:27:21 crc kubenswrapper[4697]: I0127 16:27:21.544569 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/reloader/0.log" Jan 27 16:27:22 crc kubenswrapper[4697]: I0127 16:27:22.234354 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-976dcb485-6tnr7_fc55dd19-5186-4ee0-b54d-0fec0c93f30a/manager/0.log" Jan 27 16:27:22 crc kubenswrapper[4697]: I0127 16:27:22.291484 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-kk5jh_cb6be63b-c3fd-4e21-a1b3-ffc11357a98f/frr-k8s-webhook-server/0.log" Jan 27 16:27:22 crc kubenswrapper[4697]: I0127 16:27:22.558924 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5476f886c6-mrv5l_4779f8a7-b446-4128-8800-0b6420fda6d8/webhook-server/0.log" Jan 27 16:27:22 crc kubenswrapper[4697]: I0127 16:27:22.675609 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/frr/0.log" Jan 27 16:27:22 crc kubenswrapper[4697]: I0127 16:27:22.856592 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8stft_18479ade-7486-4889-b313-79c6598cc773/kube-rbac-proxy/0.log" Jan 27 16:27:23 crc kubenswrapper[4697]: I0127 16:27:23.131356 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8stft_18479ade-7486-4889-b313-79c6598cc773/speaker/0.log" Jan 27 16:27:25 crc kubenswrapper[4697]: I0127 16:27:25.108861 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:27:25 crc kubenswrapper[4697]: I0127 16:27:25.109224 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:27:35 crc kubenswrapper[4697]: I0127 16:27:35.954519 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/util/0.log" Jan 27 16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.114527 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/util/0.log" Jan 27 16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.208187 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/pull/0.log" Jan 27 16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.229923 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/pull/0.log" Jan 27 16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.388857 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/pull/0.log" Jan 27 16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.389312 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/util/0.log" Jan 27 16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.422178 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/extract/0.log" Jan 27 16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.626006 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/util/0.log" Jan 27 16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.838246 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/pull/0.log" Jan 27 
16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.843549 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/util/0.log" Jan 27 16:27:36 crc kubenswrapper[4697]: I0127 16:27:36.879228 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/pull/0.log" Jan 27 16:27:37 crc kubenswrapper[4697]: I0127 16:27:37.089880 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/util/0.log" Jan 27 16:27:37 crc kubenswrapper[4697]: I0127 16:27:37.104119 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/extract/0.log" Jan 27 16:27:37 crc kubenswrapper[4697]: I0127 16:27:37.125117 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/pull/0.log" Jan 27 16:27:37 crc kubenswrapper[4697]: I0127 16:27:37.323723 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-utilities/0.log" Jan 27 16:27:37 crc kubenswrapper[4697]: I0127 16:27:37.537484 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-content/0.log" Jan 27 16:27:37 crc kubenswrapper[4697]: I0127 16:27:37.539356 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-content/0.log" Jan 27 16:27:37 crc kubenswrapper[4697]: I0127 16:27:37.556402 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-utilities/0.log" Jan 27 16:27:37 crc kubenswrapper[4697]: I0127 16:27:37.766836 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-content/0.log" Jan 27 16:27:37 crc kubenswrapper[4697]: I0127 16:27:37.772356 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-utilities/0.log" Jan 27 16:27:38 crc kubenswrapper[4697]: I0127 16:27:38.030050 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-utilities/0.log" Jan 27 16:27:38 crc kubenswrapper[4697]: I0127 16:27:38.405423 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-content/0.log" Jan 27 16:27:38 crc kubenswrapper[4697]: I0127 16:27:38.470582 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-utilities/0.log" Jan 27 16:27:38 crc kubenswrapper[4697]: I0127 16:27:38.471765 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/registry-server/0.log" Jan 27 16:27:38 crc kubenswrapper[4697]: I0127 16:27:38.508988 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-content/0.log" Jan 27 16:27:38 crc kubenswrapper[4697]: I0127 16:27:38.672633 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-utilities/0.log" Jan 27 16:27:38 crc kubenswrapper[4697]: I0127 16:27:38.691349 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-content/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.040621 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hwq4c_e4c801e2-39ef-4230-8bb0-fed36eccba1a/marketplace-operator/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.240173 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-utilities/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.362822 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-content/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.383618 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-utilities/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.487299 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-content/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.560198 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/registry-server/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.723267 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-content/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.769894 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-utilities/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.970718 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-utilities/0.log" Jan 27 16:27:39 crc kubenswrapper[4697]: I0127 16:27:39.978732 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/registry-server/0.log" Jan 27 16:27:40 crc kubenswrapper[4697]: I0127 16:27:40.191434 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-utilities/0.log" Jan 27 16:27:40 crc kubenswrapper[4697]: I0127 16:27:40.193539 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-content/0.log" Jan 27 16:27:40 crc kubenswrapper[4697]: I0127 16:27:40.196178 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-content/0.log" Jan 27 16:27:40 crc kubenswrapper[4697]: I0127 16:27:40.452174 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-content/0.log" Jan 
27 16:27:40 crc kubenswrapper[4697]: I0127 16:27:40.453209 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-utilities/0.log" Jan 27 16:27:41 crc kubenswrapper[4697]: I0127 16:27:41.005419 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/registry-server/0.log" Jan 27 16:27:55 crc kubenswrapper[4697]: I0127 16:27:55.109021 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:27:55 crc kubenswrapper[4697]: I0127 16:27:55.109556 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.101700 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vsv54"] Jan 27 16:28:15 crc kubenswrapper[4697]: E0127 16:28:15.102714 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerName="registry-server" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.102731 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerName="registry-server" Jan 27 16:28:15 crc kubenswrapper[4697]: E0127 16:28:15.102778 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerName="extract-content" Jan 27 16:28:15 crc 
kubenswrapper[4697]: I0127 16:28:15.102802 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerName="extract-content" Jan 27 16:28:15 crc kubenswrapper[4697]: E0127 16:28:15.102812 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerName="extract-utilities" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.102819 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerName="extract-utilities" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.102991 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e74c48-877d-4675-afcf-ddc1a98e0daf" containerName="registry-server" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.108734 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.123416 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vsv54"] Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.264205 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bz4l\" (UniqueName: \"kubernetes.io/projected/e7c264a8-28d0-487a-872b-c2e5dd245053-kube-api-access-7bz4l\") pod \"redhat-operators-vsv54\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.264267 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-utilities\") pod \"redhat-operators-vsv54\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: 
I0127 16:28:15.264357 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-catalog-content\") pod \"redhat-operators-vsv54\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.366338 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bz4l\" (UniqueName: \"kubernetes.io/projected/e7c264a8-28d0-487a-872b-c2e5dd245053-kube-api-access-7bz4l\") pod \"redhat-operators-vsv54\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.366427 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-utilities\") pod \"redhat-operators-vsv54\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.366545 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-catalog-content\") pod \"redhat-operators-vsv54\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.367139 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-utilities\") pod \"redhat-operators-vsv54\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.367195 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-catalog-content\") pod \"redhat-operators-vsv54\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.387120 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bz4l\" (UniqueName: \"kubernetes.io/projected/e7c264a8-28d0-487a-872b-c2e5dd245053-kube-api-access-7bz4l\") pod \"redhat-operators-vsv54\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.426938 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:15 crc kubenswrapper[4697]: I0127 16:28:15.983318 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vsv54"] Jan 27 16:28:16 crc kubenswrapper[4697]: I0127 16:28:16.634220 4697 generic.go:334] "Generic (PLEG): container finished" podID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerID="4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4" exitCode=0 Jan 27 16:28:16 crc kubenswrapper[4697]: I0127 16:28:16.635538 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsv54" event={"ID":"e7c264a8-28d0-487a-872b-c2e5dd245053","Type":"ContainerDied","Data":"4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4"} Jan 27 16:28:16 crc kubenswrapper[4697]: I0127 16:28:16.635588 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsv54" event={"ID":"e7c264a8-28d0-487a-872b-c2e5dd245053","Type":"ContainerStarted","Data":"7c0a9b0d01407a75ff8d5ea01e3744c23a8e2d71b4ede1524c3bce66094e710d"} Jan 27 16:28:16 crc 
kubenswrapper[4697]: I0127 16:28:16.637271 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:28:17 crc kubenswrapper[4697]: I0127 16:28:17.653108 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsv54" event={"ID":"e7c264a8-28d0-487a-872b-c2e5dd245053","Type":"ContainerStarted","Data":"23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130"} Jan 27 16:28:23 crc kubenswrapper[4697]: I0127 16:28:23.709756 4697 generic.go:334] "Generic (PLEG): container finished" podID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerID="23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130" exitCode=0 Jan 27 16:28:23 crc kubenswrapper[4697]: I0127 16:28:23.710216 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsv54" event={"ID":"e7c264a8-28d0-487a-872b-c2e5dd245053","Type":"ContainerDied","Data":"23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130"} Jan 27 16:28:24 crc kubenswrapper[4697]: I0127 16:28:24.725282 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsv54" event={"ID":"e7c264a8-28d0-487a-872b-c2e5dd245053","Type":"ContainerStarted","Data":"6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079"} Jan 27 16:28:24 crc kubenswrapper[4697]: I0127 16:28:24.753345 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vsv54" podStartSLOduration=2.012311642 podStartE2EDuration="9.753318599s" podCreationTimestamp="2026-01-27 16:28:15 +0000 UTC" firstStartedPulling="2026-01-27 16:28:16.637069383 +0000 UTC m=+4792.809469164" lastFinishedPulling="2026-01-27 16:28:24.37807634 +0000 UTC m=+4800.550476121" observedRunningTime="2026-01-27 16:28:24.745201111 +0000 UTC m=+4800.917600912" watchObservedRunningTime="2026-01-27 16:28:24.753318599 +0000 UTC m=+4800.925718380" Jan 27 
16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.109570 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.109641 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.109693 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.110449 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33ef3249056f5753476b3fb4dab581920d9492912c671892b553ffa873cec697"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.110509 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://33ef3249056f5753476b3fb4dab581920d9492912c671892b553ffa873cec697" gracePeriod=600 Jan 27 16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.427837 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:25 crc kubenswrapper[4697]: 
I0127 16:28:25.428144 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.738200 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="33ef3249056f5753476b3fb4dab581920d9492912c671892b553ffa873cec697" exitCode=0 Jan 27 16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.738315 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"33ef3249056f5753476b3fb4dab581920d9492912c671892b553ffa873cec697"} Jan 27 16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.738384 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"} Jan 27 16:28:25 crc kubenswrapper[4697]: I0127 16:28:25.738433 4697 scope.go:117] "RemoveContainer" containerID="1ed28bba8d311f2ce4742176145e237c684162adcd23036e7e9084723cdfe227" Jan 27 16:28:26 crc kubenswrapper[4697]: I0127 16:28:26.497400 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vsv54" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerName="registry-server" probeResult="failure" output=< Jan 27 16:28:26 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 16:28:26 crc kubenswrapper[4697]: > Jan 27 16:28:35 crc kubenswrapper[4697]: I0127 16:28:35.495581 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:35 crc kubenswrapper[4697]: I0127 16:28:35.561715 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:35 crc kubenswrapper[4697]: I0127 16:28:35.739347 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vsv54"] Jan 27 16:28:36 crc kubenswrapper[4697]: I0127 16:28:36.843831 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vsv54" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerName="registry-server" containerID="cri-o://6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079" gracePeriod=2 Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.305926 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.433015 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-catalog-content\") pod \"e7c264a8-28d0-487a-872b-c2e5dd245053\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.433096 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-utilities\") pod \"e7c264a8-28d0-487a-872b-c2e5dd245053\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.433316 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bz4l\" (UniqueName: \"kubernetes.io/projected/e7c264a8-28d0-487a-872b-c2e5dd245053-kube-api-access-7bz4l\") pod \"e7c264a8-28d0-487a-872b-c2e5dd245053\" (UID: \"e7c264a8-28d0-487a-872b-c2e5dd245053\") " Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.435254 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-utilities" (OuterVolumeSpecName: "utilities") pod "e7c264a8-28d0-487a-872b-c2e5dd245053" (UID: "e7c264a8-28d0-487a-872b-c2e5dd245053"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.443473 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c264a8-28d0-487a-872b-c2e5dd245053-kube-api-access-7bz4l" (OuterVolumeSpecName: "kube-api-access-7bz4l") pod "e7c264a8-28d0-487a-872b-c2e5dd245053" (UID: "e7c264a8-28d0-487a-872b-c2e5dd245053"). InnerVolumeSpecName "kube-api-access-7bz4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.535384 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.535414 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bz4l\" (UniqueName: \"kubernetes.io/projected/e7c264a8-28d0-487a-872b-c2e5dd245053-kube-api-access-7bz4l\") on node \"crc\" DevicePath \"\"" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.578796 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7c264a8-28d0-487a-872b-c2e5dd245053" (UID: "e7c264a8-28d0-487a-872b-c2e5dd245053"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.637391 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c264a8-28d0-487a-872b-c2e5dd245053-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.859978 4697 generic.go:334] "Generic (PLEG): container finished" podID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerID="6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079" exitCode=0 Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.860056 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vsv54" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.860075 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsv54" event={"ID":"e7c264a8-28d0-487a-872b-c2e5dd245053","Type":"ContainerDied","Data":"6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079"} Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.860127 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vsv54" event={"ID":"e7c264a8-28d0-487a-872b-c2e5dd245053","Type":"ContainerDied","Data":"7c0a9b0d01407a75ff8d5ea01e3744c23a8e2d71b4ede1524c3bce66094e710d"} Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.860163 4697 scope.go:117] "RemoveContainer" containerID="6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.909497 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vsv54"] Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.913276 4697 scope.go:117] "RemoveContainer" containerID="23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 
16:28:37.922200 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vsv54"] Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.946699 4697 scope.go:117] "RemoveContainer" containerID="4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.980688 4697 scope.go:117] "RemoveContainer" containerID="6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079" Jan 27 16:28:37 crc kubenswrapper[4697]: E0127 16:28:37.981248 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079\": container with ID starting with 6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079 not found: ID does not exist" containerID="6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.981289 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079"} err="failed to get container status \"6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079\": rpc error: code = NotFound desc = could not find container \"6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079\": container with ID starting with 6f852b4c4c7cd2292b643619e9dbff79ea909d271e71346e243374b5ea353079 not found: ID does not exist" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.981359 4697 scope.go:117] "RemoveContainer" containerID="23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130" Jan 27 16:28:37 crc kubenswrapper[4697]: E0127 16:28:37.981833 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130\": container with ID 
starting with 23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130 not found: ID does not exist" containerID="23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.981902 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130"} err="failed to get container status \"23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130\": rpc error: code = NotFound desc = could not find container \"23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130\": container with ID starting with 23df7ea5890677785e25f3ffcfd5cc4903dc5dfd8b226fec263172b8ac09c130 not found: ID does not exist" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.981950 4697 scope.go:117] "RemoveContainer" containerID="4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4" Jan 27 16:28:37 crc kubenswrapper[4697]: E0127 16:28:37.982257 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4\": container with ID starting with 4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4 not found: ID does not exist" containerID="4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4" Jan 27 16:28:37 crc kubenswrapper[4697]: I0127 16:28:37.982290 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4"} err="failed to get container status \"4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4\": rpc error: code = NotFound desc = could not find container \"4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4\": container with ID starting with 4a3686c31f79f28da2f16c4a99963e5063847c6328501ad3a5208c1462be06a4 not found: 
ID does not exist" Jan 27 16:28:38 crc kubenswrapper[4697]: I0127 16:28:38.578745 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" path="/var/lib/kubelet/pods/e7c264a8-28d0-487a-872b-c2e5dd245053/volumes" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.156067 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86"] Jan 27 16:30:00 crc kubenswrapper[4697]: E0127 16:30:00.157070 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerName="registry-server" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.157086 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerName="registry-server" Jan 27 16:30:00 crc kubenswrapper[4697]: E0127 16:30:00.157136 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerName="extract-content" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.157147 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerName="extract-content" Jan 27 16:30:00 crc kubenswrapper[4697]: E0127 16:30:00.157196 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerName="extract-utilities" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.157205 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerName="extract-utilities" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.157434 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c264a8-28d0-487a-872b-c2e5dd245053" containerName="registry-server" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.158218 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.166736 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.166737 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.171403 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86"] Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.251817 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5gxn\" (UniqueName: \"kubernetes.io/projected/4cc090a7-1576-4f0e-91f2-3ede5badc83a-kube-api-access-h5gxn\") pod \"collect-profiles-29492190-2zp86\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.252126 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cc090a7-1576-4f0e-91f2-3ede5badc83a-config-volume\") pod \"collect-profiles-29492190-2zp86\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.252223 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cc090a7-1576-4f0e-91f2-3ede5badc83a-secret-volume\") pod \"collect-profiles-29492190-2zp86\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.354111 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cc090a7-1576-4f0e-91f2-3ede5badc83a-config-volume\") pod \"collect-profiles-29492190-2zp86\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.354714 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cc090a7-1576-4f0e-91f2-3ede5badc83a-secret-volume\") pod \"collect-profiles-29492190-2zp86\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.355055 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5gxn\" (UniqueName: \"kubernetes.io/projected/4cc090a7-1576-4f0e-91f2-3ede5badc83a-kube-api-access-h5gxn\") pod \"collect-profiles-29492190-2zp86\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.355347 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cc090a7-1576-4f0e-91f2-3ede5badc83a-config-volume\") pod \"collect-profiles-29492190-2zp86\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.361794 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4cc090a7-1576-4f0e-91f2-3ede5badc83a-secret-volume\") pod \"collect-profiles-29492190-2zp86\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86"
Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.373997 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5gxn\" (UniqueName: \"kubernetes.io/projected/4cc090a7-1576-4f0e-91f2-3ede5badc83a-kube-api-access-h5gxn\") pod \"collect-profiles-29492190-2zp86\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86"
Jan 27 16:30:00 crc kubenswrapper[4697]: I0127 16:30:00.521865 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86"
Jan 27 16:30:01 crc kubenswrapper[4697]: I0127 16:30:01.000794 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86"]
Jan 27 16:30:01 crc kubenswrapper[4697]: I0127 16:30:01.639907 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" event={"ID":"4cc090a7-1576-4f0e-91f2-3ede5badc83a","Type":"ContainerStarted","Data":"0163ece7108852bcaddc7f162760cee6d6fb6c3312023facf8ab781bb2e3e975"}
Jan 27 16:30:01 crc kubenswrapper[4697]: I0127 16:30:01.640227 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" event={"ID":"4cc090a7-1576-4f0e-91f2-3ede5badc83a","Type":"ContainerStarted","Data":"3203952676eb4781b9817fc72be03d70348ecfad96ec065b560da46061bbbbaa"}
Jan 27 16:30:01 crc kubenswrapper[4697]: I0127 16:30:01.659833 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" podStartSLOduration=1.6598026209999999 podStartE2EDuration="1.659802621s" podCreationTimestamp="2026-01-27 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:30:01.659530435 +0000 UTC m=+4897.831930216" watchObservedRunningTime="2026-01-27 16:30:01.659802621 +0000 UTC m=+4897.832202402"
Jan 27 16:30:02 crc kubenswrapper[4697]: I0127 16:30:02.649656 4697 generic.go:334] "Generic (PLEG): container finished" podID="4cc090a7-1576-4f0e-91f2-3ede5badc83a" containerID="0163ece7108852bcaddc7f162760cee6d6fb6c3312023facf8ab781bb2e3e975" exitCode=0
Jan 27 16:30:02 crc kubenswrapper[4697]: I0127 16:30:02.649833 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" event={"ID":"4cc090a7-1576-4f0e-91f2-3ede5badc83a","Type":"ContainerDied","Data":"0163ece7108852bcaddc7f162760cee6d6fb6c3312023facf8ab781bb2e3e975"}
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.049626 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86"
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.236030 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5gxn\" (UniqueName: \"kubernetes.io/projected/4cc090a7-1576-4f0e-91f2-3ede5badc83a-kube-api-access-h5gxn\") pod \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") "
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.236314 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cc090a7-1576-4f0e-91f2-3ede5badc83a-secret-volume\") pod \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") "
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.236388 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cc090a7-1576-4f0e-91f2-3ede5badc83a-config-volume\") pod \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\" (UID: \"4cc090a7-1576-4f0e-91f2-3ede5badc83a\") "
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.237304 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc090a7-1576-4f0e-91f2-3ede5badc83a-config-volume" (OuterVolumeSpecName: "config-volume") pod "4cc090a7-1576-4f0e-91f2-3ede5badc83a" (UID: "4cc090a7-1576-4f0e-91f2-3ede5badc83a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.241979 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc090a7-1576-4f0e-91f2-3ede5badc83a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4cc090a7-1576-4f0e-91f2-3ede5badc83a" (UID: "4cc090a7-1576-4f0e-91f2-3ede5badc83a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.243913 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc090a7-1576-4f0e-91f2-3ede5badc83a-kube-api-access-h5gxn" (OuterVolumeSpecName: "kube-api-access-h5gxn") pod "4cc090a7-1576-4f0e-91f2-3ede5badc83a" (UID: "4cc090a7-1576-4f0e-91f2-3ede5badc83a"). InnerVolumeSpecName "kube-api-access-h5gxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.339174 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cc090a7-1576-4f0e-91f2-3ede5badc83a-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.339221 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cc090a7-1576-4f0e-91f2-3ede5badc83a-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.339232 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5gxn\" (UniqueName: \"kubernetes.io/projected/4cc090a7-1576-4f0e-91f2-3ede5badc83a-kube-api-access-h5gxn\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.667760 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86" event={"ID":"4cc090a7-1576-4f0e-91f2-3ede5badc83a","Type":"ContainerDied","Data":"3203952676eb4781b9817fc72be03d70348ecfad96ec065b560da46061bbbbaa"}
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.667814 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3203952676eb4781b9817fc72be03d70348ecfad96ec065b560da46061bbbbaa"
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.667866 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-2zp86"
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.739537 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d"]
Jan 27 16:30:04 crc kubenswrapper[4697]: I0127 16:30:04.749549 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-9dp4d"]
Jan 27 16:30:06 crc kubenswrapper[4697]: I0127 16:30:06.581041 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0ad28c-9462-4d97-bcc9-da634e079fd2" path="/var/lib/kubelet/pods/5b0ad28c-9462-4d97-bcc9-da634e079fd2/volumes"
Jan 27 16:30:06 crc kubenswrapper[4697]: I0127 16:30:06.690636 4697 generic.go:334] "Generic (PLEG): container finished" podID="42dad571-2960-4925-8218-db035c05b9cb" containerID="f3d79fd89f3e5741ad81ab89b3cafd5c401973847770a9d0e653f65dde8239d8" exitCode=0
Jan 27 16:30:06 crc kubenswrapper[4697]: I0127 16:30:06.690728 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fbgrj/must-gather-hxbrc" event={"ID":"42dad571-2960-4925-8218-db035c05b9cb","Type":"ContainerDied","Data":"f3d79fd89f3e5741ad81ab89b3cafd5c401973847770a9d0e653f65dde8239d8"}
Jan 27 16:30:06 crc kubenswrapper[4697]: I0127 16:30:06.691429 4697 scope.go:117] "RemoveContainer" containerID="f3d79fd89f3e5741ad81ab89b3cafd5c401973847770a9d0e653f65dde8239d8"
Jan 27 16:30:07 crc kubenswrapper[4697]: I0127 16:30:07.139109 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fbgrj_must-gather-hxbrc_42dad571-2960-4925-8218-db035c05b9cb/gather/0.log"
Jan 27 16:30:15 crc kubenswrapper[4697]: I0127 16:30:15.523748 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fbgrj/must-gather-hxbrc"]
Jan 27 16:30:15 crc kubenswrapper[4697]: I0127 16:30:15.524536 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-fbgrj/must-gather-hxbrc" podUID="42dad571-2960-4925-8218-db035c05b9cb" containerName="copy" containerID="cri-o://e9f961bb0a344467d99e50c256c0a2de495c120a013237ff3b3897ff13041e4c" gracePeriod=2
Jan 27 16:30:15 crc kubenswrapper[4697]: I0127 16:30:15.535572 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fbgrj/must-gather-hxbrc"]
Jan 27 16:30:15 crc kubenswrapper[4697]: I0127 16:30:15.867020 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fbgrj_must-gather-hxbrc_42dad571-2960-4925-8218-db035c05b9cb/copy/0.log"
Jan 27 16:30:15 crc kubenswrapper[4697]: I0127 16:30:15.867649 4697 generic.go:334] "Generic (PLEG): container finished" podID="42dad571-2960-4925-8218-db035c05b9cb" containerID="e9f961bb0a344467d99e50c256c0a2de495c120a013237ff3b3897ff13041e4c" exitCode=143
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.160356 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fbgrj_must-gather-hxbrc_42dad571-2960-4925-8218-db035c05b9cb/copy/0.log"
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.161243 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.179477 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42dad571-2960-4925-8218-db035c05b9cb-must-gather-output\") pod \"42dad571-2960-4925-8218-db035c05b9cb\" (UID: \"42dad571-2960-4925-8218-db035c05b9cb\") "
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.179615 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67zxd\" (UniqueName: \"kubernetes.io/projected/42dad571-2960-4925-8218-db035c05b9cb-kube-api-access-67zxd\") pod \"42dad571-2960-4925-8218-db035c05b9cb\" (UID: \"42dad571-2960-4925-8218-db035c05b9cb\") "
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.194693 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42dad571-2960-4925-8218-db035c05b9cb-kube-api-access-67zxd" (OuterVolumeSpecName: "kube-api-access-67zxd") pod "42dad571-2960-4925-8218-db035c05b9cb" (UID: "42dad571-2960-4925-8218-db035c05b9cb"). InnerVolumeSpecName "kube-api-access-67zxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.281694 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67zxd\" (UniqueName: \"kubernetes.io/projected/42dad571-2960-4925-8218-db035c05b9cb-kube-api-access-67zxd\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.363504 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42dad571-2960-4925-8218-db035c05b9cb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "42dad571-2960-4925-8218-db035c05b9cb" (UID: "42dad571-2960-4925-8218-db035c05b9cb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.383944 4697 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42dad571-2960-4925-8218-db035c05b9cb-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.597957 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42dad571-2960-4925-8218-db035c05b9cb" path="/var/lib/kubelet/pods/42dad571-2960-4925-8218-db035c05b9cb/volumes"
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.877429 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fbgrj_must-gather-hxbrc_42dad571-2960-4925-8218-db035c05b9cb/copy/0.log"
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.878244 4697 scope.go:117] "RemoveContainer" containerID="e9f961bb0a344467d99e50c256c0a2de495c120a013237ff3b3897ff13041e4c"
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.878280 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fbgrj/must-gather-hxbrc"
Jan 27 16:30:16 crc kubenswrapper[4697]: I0127 16:30:16.907453 4697 scope.go:117] "RemoveContainer" containerID="f3d79fd89f3e5741ad81ab89b3cafd5c401973847770a9d0e653f65dde8239d8"
Jan 27 16:30:25 crc kubenswrapper[4697]: I0127 16:30:25.108650 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:30:25 crc kubenswrapper[4697]: I0127 16:30:25.109290 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:30:49 crc kubenswrapper[4697]: I0127 16:30:49.467898 4697 scope.go:117] "RemoveContainer" containerID="182fe67c72a5584c7524a69f4060233fbd552baf1612123e9feb64d6906c4cd5"
Jan 27 16:30:55 crc kubenswrapper[4697]: I0127 16:30:55.108767 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:30:55 crc kubenswrapper[4697]: I0127 16:30:55.109386 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:31:25 crc kubenswrapper[4697]: I0127 16:31:25.109400 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:31:25 crc kubenswrapper[4697]: I0127 16:31:25.109929 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:31:25 crc kubenswrapper[4697]: I0127 16:31:25.109971 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495"
Jan 27 16:31:25 crc kubenswrapper[4697]: I0127 16:31:25.110672 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 16:31:25 crc kubenswrapper[4697]: I0127 16:31:25.110724 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" gracePeriod=600
Jan 27 16:31:25 crc kubenswrapper[4697]: E0127 16:31:25.312029 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:31:25 crc kubenswrapper[4697]: I0127 16:31:25.505168 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" exitCode=0
Jan 27 16:31:25 crc kubenswrapper[4697]: I0127 16:31:25.505230 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"}
Jan 27 16:31:25 crc kubenswrapper[4697]: I0127 16:31:25.505272 4697 scope.go:117] "RemoveContainer" containerID="33ef3249056f5753476b3fb4dab581920d9492912c671892b553ffa873cec697"
Jan 27 16:31:25 crc kubenswrapper[4697]: I0127 16:31:25.506151 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:31:25 crc kubenswrapper[4697]: E0127 16:31:25.506437 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:31:37 crc kubenswrapper[4697]: I0127 16:31:37.568426 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:31:37 crc kubenswrapper[4697]: E0127 16:31:37.569271 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:31:51 crc kubenswrapper[4697]: I0127 16:31:51.569318 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:31:51 crc kubenswrapper[4697]: E0127 16:31:51.570439 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:32:02 crc kubenswrapper[4697]: I0127 16:32:02.568438 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:32:02 crc kubenswrapper[4697]: E0127 16:32:02.569259 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:32:14 crc kubenswrapper[4697]: I0127 16:32:14.574259 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:32:14 crc kubenswrapper[4697]: E0127 16:32:14.575067 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:32:27 crc kubenswrapper[4697]: I0127 16:32:27.568971 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:32:27 crc kubenswrapper[4697]: E0127 16:32:27.570998 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:32:39 crc kubenswrapper[4697]: I0127 16:32:39.569384 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:32:39 crc kubenswrapper[4697]: E0127 16:32:39.570515 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:32:53 crc kubenswrapper[4697]: I0127 16:32:53.568498 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:32:53 crc kubenswrapper[4697]: E0127 16:32:53.569295 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:33:04 crc kubenswrapper[4697]: I0127 16:33:04.568728 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:33:04 crc kubenswrapper[4697]: E0127 16:33:04.569473 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.098290 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2bwm/must-gather-mwlm9"]
Jan 27 16:33:14 crc kubenswrapper[4697]: E0127 16:33:14.099315 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42dad571-2960-4925-8218-db035c05b9cb" containerName="gather"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.099335 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="42dad571-2960-4925-8218-db035c05b9cb" containerName="gather"
Jan 27 16:33:14 crc kubenswrapper[4697]: E0127 16:33:14.099353 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc090a7-1576-4f0e-91f2-3ede5badc83a" containerName="collect-profiles"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.099362 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc090a7-1576-4f0e-91f2-3ede5badc83a" containerName="collect-profiles"
Jan 27 16:33:14 crc kubenswrapper[4697]: E0127 16:33:14.099373 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42dad571-2960-4925-8218-db035c05b9cb" containerName="copy"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.099383 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="42dad571-2960-4925-8218-db035c05b9cb" containerName="copy"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.099623 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="42dad571-2960-4925-8218-db035c05b9cb" containerName="copy"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.099653 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="42dad571-2960-4925-8218-db035c05b9cb" containerName="gather"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.099665 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc090a7-1576-4f0e-91f2-3ede5badc83a" containerName="collect-profiles"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.100899 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.104242 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-w2bwm"/"default-dockercfg-rh4nk"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.105253 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w2bwm"/"openshift-service-ca.crt"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.105692 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w2bwm"/"kube-root-ca.crt"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.167642 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4glk7\" (UniqueName: \"kubernetes.io/projected/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-kube-api-access-4glk7\") pod \"must-gather-mwlm9\" (UID: \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\") " pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.167690 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-must-gather-output\") pod \"must-gather-mwlm9\" (UID: \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\") " pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.188592 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w2bwm/must-gather-mwlm9"]
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.271669 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4glk7\" (UniqueName: \"kubernetes.io/projected/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-kube-api-access-4glk7\") pod \"must-gather-mwlm9\" (UID: \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\") " pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.271708 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-must-gather-output\") pod \"must-gather-mwlm9\" (UID: \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\") " pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.272123 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-must-gather-output\") pod \"must-gather-mwlm9\" (UID: \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\") " pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.290684 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4glk7\" (UniqueName: \"kubernetes.io/projected/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-kube-api-access-4glk7\") pod \"must-gather-mwlm9\" (UID: \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\") " pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:33:14 crc kubenswrapper[4697]: I0127 16:33:14.425583 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:33:15 crc kubenswrapper[4697]: I0127 16:33:15.626993 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w2bwm/must-gather-mwlm9"]
Jan 27 16:33:16 crc kubenswrapper[4697]: I0127 16:33:16.314561 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/must-gather-mwlm9" event={"ID":"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729","Type":"ContainerStarted","Data":"7436ff36f164596e240ba0206ae565af669590c834cb31f03a3c39727a123426"}
Jan 27 16:33:16 crc kubenswrapper[4697]: I0127 16:33:16.314829 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/must-gather-mwlm9" event={"ID":"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729","Type":"ContainerStarted","Data":"2c923a1e32a8d931fa6cacdf9ede2a8fb460752e3ffc2d65c7fc276afc51b285"}
Jan 27 16:33:16 crc kubenswrapper[4697]: I0127 16:33:16.314840 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/must-gather-mwlm9" event={"ID":"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729","Type":"ContainerStarted","Data":"24bb7a7922d7dd8adff5b7e056ee4bd23986333363b6fe2a91e72ff38fb90bc7"}
Jan 27 16:33:18 crc kubenswrapper[4697]: I0127 16:33:18.568919 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:33:18 crc kubenswrapper[4697]: E0127 16:33:18.569589 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.673955 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w2bwm/must-gather-mwlm9" podStartSLOduration=6.673936097 podStartE2EDuration="6.673936097s" podCreationTimestamp="2026-01-27 16:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:33:16.33214653 +0000 UTC m=+5092.504546311" watchObservedRunningTime="2026-01-27 16:33:20.673936097 +0000 UTC m=+5096.846335898"
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.677974 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2bwm/crc-debug-f8657"]
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.679651 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-f8657"
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.814090 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41d0a357-5b78-4b51-a514-8fdff4453c55-host\") pod \"crc-debug-f8657\" (UID: \"41d0a357-5b78-4b51-a514-8fdff4453c55\") " pod="openshift-must-gather-w2bwm/crc-debug-f8657"
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.814149 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglzp\" (UniqueName: \"kubernetes.io/projected/41d0a357-5b78-4b51-a514-8fdff4453c55-kube-api-access-mglzp\") pod \"crc-debug-f8657\" (UID: \"41d0a357-5b78-4b51-a514-8fdff4453c55\") " pod="openshift-must-gather-w2bwm/crc-debug-f8657"
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.916443 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41d0a357-5b78-4b51-a514-8fdff4453c55-host\") pod \"crc-debug-f8657\" (UID: \"41d0a357-5b78-4b51-a514-8fdff4453c55\") " pod="openshift-must-gather-w2bwm/crc-debug-f8657"
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.916504 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mglzp\" (UniqueName: \"kubernetes.io/projected/41d0a357-5b78-4b51-a514-8fdff4453c55-kube-api-access-mglzp\") pod \"crc-debug-f8657\" (UID: \"41d0a357-5b78-4b51-a514-8fdff4453c55\") " pod="openshift-must-gather-w2bwm/crc-debug-f8657"
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.916906 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41d0a357-5b78-4b51-a514-8fdff4453c55-host\") pod \"crc-debug-f8657\" (UID: \"41d0a357-5b78-4b51-a514-8fdff4453c55\") " pod="openshift-must-gather-w2bwm/crc-debug-f8657"
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.937674 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mglzp\" (UniqueName: \"kubernetes.io/projected/41d0a357-5b78-4b51-a514-8fdff4453c55-kube-api-access-mglzp\") pod \"crc-debug-f8657\" (UID: \"41d0a357-5b78-4b51-a514-8fdff4453c55\") " pod="openshift-must-gather-w2bwm/crc-debug-f8657"
Jan 27 16:33:20 crc kubenswrapper[4697]: I0127 16:33:20.998622 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-f8657"
Jan 27 16:33:21 crc kubenswrapper[4697]: W0127 16:33:21.028032 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41d0a357_5b78_4b51_a514_8fdff4453c55.slice/crio-1e7d9222ff25794cfd8147d8541b3bcc50d0cb4badf28ba0549829bb83d4162c WatchSource:0}: Error finding container 1e7d9222ff25794cfd8147d8541b3bcc50d0cb4badf28ba0549829bb83d4162c: Status 404 returned error can't find the container with id 1e7d9222ff25794cfd8147d8541b3bcc50d0cb4badf28ba0549829bb83d4162c
Jan 27 16:33:21 crc kubenswrapper[4697]: I0127 16:33:21.363841 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/crc-debug-f8657" event={"ID":"41d0a357-5b78-4b51-a514-8fdff4453c55","Type":"ContainerStarted","Data":"379b269d99ad09fcb6205d7a87d5f862ebdfc9117f1edb1b82ad83708146ec7d"}
Jan 27 16:33:21 crc kubenswrapper[4697]: I0127 16:33:21.364380 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/crc-debug-f8657" event={"ID":"41d0a357-5b78-4b51-a514-8fdff4453c55","Type":"ContainerStarted","Data":"1e7d9222ff25794cfd8147d8541b3bcc50d0cb4badf28ba0549829bb83d4162c"}
Jan 27 16:33:21 crc kubenswrapper[4697]: I0127 16:33:21.385083 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w2bwm/crc-debug-f8657" podStartSLOduration=1.385060197 podStartE2EDuration="1.385060197s" podCreationTimestamp="2026-01-27 16:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:33:21.377426882 +0000 UTC m=+5097.549826663" watchObservedRunningTime="2026-01-27 16:33:21.385060197 +0000 UTC m=+5097.557459978"
Jan 27 16:33:32 crc kubenswrapper[4697]: I0127 16:33:32.568993 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:33:32 crc kubenswrapper[4697]: E0127 16:33:32.569808 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:33:47 crc kubenswrapper[4697]: I0127 16:33:47.568129 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:33:47 crc kubenswrapper[4697]: E0127 16:33:47.568751 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:33:59 crc kubenswrapper[4697]: I0127 16:33:59.568572 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:33:59 crc kubenswrapper[4697]: E0127 16:33:59.569595 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6"
Jan 27 16:34:02 crc kubenswrapper[4697]: I0127 16:34:02.714636 4697 generic.go:334]
"Generic (PLEG): container finished" podID="41d0a357-5b78-4b51-a514-8fdff4453c55" containerID="379b269d99ad09fcb6205d7a87d5f862ebdfc9117f1edb1b82ad83708146ec7d" exitCode=0 Jan 27 16:34:02 crc kubenswrapper[4697]: I0127 16:34:02.715517 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/crc-debug-f8657" event={"ID":"41d0a357-5b78-4b51-a514-8fdff4453c55","Type":"ContainerDied","Data":"379b269d99ad09fcb6205d7a87d5f862ebdfc9117f1edb1b82ad83708146ec7d"} Jan 27 16:34:03 crc kubenswrapper[4697]: I0127 16:34:03.840558 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-f8657" Jan 27 16:34:03 crc kubenswrapper[4697]: I0127 16:34:03.882037 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2bwm/crc-debug-f8657"] Jan 27 16:34:03 crc kubenswrapper[4697]: I0127 16:34:03.891897 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2bwm/crc-debug-f8657"] Jan 27 16:34:04 crc kubenswrapper[4697]: I0127 16:34:04.032752 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41d0a357-5b78-4b51-a514-8fdff4453c55-host\") pod \"41d0a357-5b78-4b51-a514-8fdff4453c55\" (UID: \"41d0a357-5b78-4b51-a514-8fdff4453c55\") " Jan 27 16:34:04 crc kubenswrapper[4697]: I0127 16:34:04.032870 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d0a357-5b78-4b51-a514-8fdff4453c55-host" (OuterVolumeSpecName: "host") pod "41d0a357-5b78-4b51-a514-8fdff4453c55" (UID: "41d0a357-5b78-4b51-a514-8fdff4453c55"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:34:04 crc kubenswrapper[4697]: I0127 16:34:04.032949 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mglzp\" (UniqueName: \"kubernetes.io/projected/41d0a357-5b78-4b51-a514-8fdff4453c55-kube-api-access-mglzp\") pod \"41d0a357-5b78-4b51-a514-8fdff4453c55\" (UID: \"41d0a357-5b78-4b51-a514-8fdff4453c55\") " Jan 27 16:34:04 crc kubenswrapper[4697]: I0127 16:34:04.033550 4697 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41d0a357-5b78-4b51-a514-8fdff4453c55-host\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:04 crc kubenswrapper[4697]: I0127 16:34:04.356765 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d0a357-5b78-4b51-a514-8fdff4453c55-kube-api-access-mglzp" (OuterVolumeSpecName: "kube-api-access-mglzp") pod "41d0a357-5b78-4b51-a514-8fdff4453c55" (UID: "41d0a357-5b78-4b51-a514-8fdff4453c55"). InnerVolumeSpecName "kube-api-access-mglzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:34:04 crc kubenswrapper[4697]: I0127 16:34:04.441757 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mglzp\" (UniqueName: \"kubernetes.io/projected/41d0a357-5b78-4b51-a514-8fdff4453c55-kube-api-access-mglzp\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:04 crc kubenswrapper[4697]: I0127 16:34:04.585770 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d0a357-5b78-4b51-a514-8fdff4453c55" path="/var/lib/kubelet/pods/41d0a357-5b78-4b51-a514-8fdff4453c55/volumes" Jan 27 16:34:04 crc kubenswrapper[4697]: I0127 16:34:04.732873 4697 scope.go:117] "RemoveContainer" containerID="379b269d99ad09fcb6205d7a87d5f862ebdfc9117f1edb1b82ad83708146ec7d" Jan 27 16:34:04 crc kubenswrapper[4697]: I0127 16:34:04.733115 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-f8657" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.067580 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2bwm/crc-debug-g66ql"] Jan 27 16:34:05 crc kubenswrapper[4697]: E0127 16:34:05.084838 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d0a357-5b78-4b51-a514-8fdff4453c55" containerName="container-00" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.084884 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d0a357-5b78-4b51-a514-8fdff4453c55" containerName="container-00" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.085460 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d0a357-5b78-4b51-a514-8fdff4453c55" containerName="container-00" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.086531 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.255162 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67d88\" (UniqueName: \"kubernetes.io/projected/0c7bc3fc-128f-49f2-9946-a9e41b846030-kube-api-access-67d88\") pod \"crc-debug-g66ql\" (UID: \"0c7bc3fc-128f-49f2-9946-a9e41b846030\") " pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.255563 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c7bc3fc-128f-49f2-9946-a9e41b846030-host\") pod \"crc-debug-g66ql\" (UID: \"0c7bc3fc-128f-49f2-9946-a9e41b846030\") " pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.357511 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67d88\" (UniqueName: 
\"kubernetes.io/projected/0c7bc3fc-128f-49f2-9946-a9e41b846030-kube-api-access-67d88\") pod \"crc-debug-g66ql\" (UID: \"0c7bc3fc-128f-49f2-9946-a9e41b846030\") " pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.358083 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c7bc3fc-128f-49f2-9946-a9e41b846030-host\") pod \"crc-debug-g66ql\" (UID: \"0c7bc3fc-128f-49f2-9946-a9e41b846030\") " pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.358146 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c7bc3fc-128f-49f2-9946-a9e41b846030-host\") pod \"crc-debug-g66ql\" (UID: \"0c7bc3fc-128f-49f2-9946-a9e41b846030\") " pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.378425 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67d88\" (UniqueName: \"kubernetes.io/projected/0c7bc3fc-128f-49f2-9946-a9e41b846030-kube-api-access-67d88\") pod \"crc-debug-g66ql\" (UID: \"0c7bc3fc-128f-49f2-9946-a9e41b846030\") " pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.406498 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.741324 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/crc-debug-g66ql" event={"ID":"0c7bc3fc-128f-49f2-9946-a9e41b846030","Type":"ContainerStarted","Data":"60c567d5de796a5da350c35d5d2deb3fbe50a49a832427e32acf8e88f0ef4a5a"} Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.741618 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/crc-debug-g66ql" event={"ID":"0c7bc3fc-128f-49f2-9946-a9e41b846030","Type":"ContainerStarted","Data":"3f9b6ca234f9a3f7c66deda36fbb8e232650aae5e60ffedf1e65800ca2447f22"} Jan 27 16:34:05 crc kubenswrapper[4697]: I0127 16:34:05.752280 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w2bwm/crc-debug-g66ql" podStartSLOduration=0.75226123 podStartE2EDuration="752.26123ms" podCreationTimestamp="2026-01-27 16:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:34:05.751292046 +0000 UTC m=+5141.923691837" watchObservedRunningTime="2026-01-27 16:34:05.75226123 +0000 UTC m=+5141.924661011" Jan 27 16:34:06 crc kubenswrapper[4697]: I0127 16:34:06.761360 4697 generic.go:334] "Generic (PLEG): container finished" podID="0c7bc3fc-128f-49f2-9946-a9e41b846030" containerID="60c567d5de796a5da350c35d5d2deb3fbe50a49a832427e32acf8e88f0ef4a5a" exitCode=0 Jan 27 16:34:06 crc kubenswrapper[4697]: I0127 16:34:06.761445 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/crc-debug-g66ql" event={"ID":"0c7bc3fc-128f-49f2-9946-a9e41b846030","Type":"ContainerDied","Data":"60c567d5de796a5da350c35d5d2deb3fbe50a49a832427e32acf8e88f0ef4a5a"} Jan 27 16:34:07 crc kubenswrapper[4697]: I0127 16:34:07.877889 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:07 crc kubenswrapper[4697]: I0127 16:34:07.912955 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2bwm/crc-debug-g66ql"] Jan 27 16:34:07 crc kubenswrapper[4697]: I0127 16:34:07.921057 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2bwm/crc-debug-g66ql"] Jan 27 16:34:07 crc kubenswrapper[4697]: I0127 16:34:07.999642 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c7bc3fc-128f-49f2-9946-a9e41b846030-host\") pod \"0c7bc3fc-128f-49f2-9946-a9e41b846030\" (UID: \"0c7bc3fc-128f-49f2-9946-a9e41b846030\") " Jan 27 16:34:07 crc kubenswrapper[4697]: I0127 16:34:07.999746 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67d88\" (UniqueName: \"kubernetes.io/projected/0c7bc3fc-128f-49f2-9946-a9e41b846030-kube-api-access-67d88\") pod \"0c7bc3fc-128f-49f2-9946-a9e41b846030\" (UID: \"0c7bc3fc-128f-49f2-9946-a9e41b846030\") " Jan 27 16:34:08 crc kubenswrapper[4697]: I0127 16:34:08.000433 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c7bc3fc-128f-49f2-9946-a9e41b846030-host" (OuterVolumeSpecName: "host") pod "0c7bc3fc-128f-49f2-9946-a9e41b846030" (UID: "0c7bc3fc-128f-49f2-9946-a9e41b846030"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:34:08 crc kubenswrapper[4697]: I0127 16:34:08.016129 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7bc3fc-128f-49f2-9946-a9e41b846030-kube-api-access-67d88" (OuterVolumeSpecName: "kube-api-access-67d88") pod "0c7bc3fc-128f-49f2-9946-a9e41b846030" (UID: "0c7bc3fc-128f-49f2-9946-a9e41b846030"). InnerVolumeSpecName "kube-api-access-67d88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:34:08 crc kubenswrapper[4697]: I0127 16:34:08.101727 4697 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c7bc3fc-128f-49f2-9946-a9e41b846030-host\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:08 crc kubenswrapper[4697]: I0127 16:34:08.101779 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67d88\" (UniqueName: \"kubernetes.io/projected/0c7bc3fc-128f-49f2-9946-a9e41b846030-kube-api-access-67d88\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:08 crc kubenswrapper[4697]: I0127 16:34:08.580313 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7bc3fc-128f-49f2-9946-a9e41b846030" path="/var/lib/kubelet/pods/0c7bc3fc-128f-49f2-9946-a9e41b846030/volumes" Jan 27 16:34:08 crc kubenswrapper[4697]: I0127 16:34:08.777761 4697 scope.go:117] "RemoveContainer" containerID="60c567d5de796a5da350c35d5d2deb3fbe50a49a832427e32acf8e88f0ef4a5a" Jan 27 16:34:08 crc kubenswrapper[4697]: I0127 16:34:08.777803 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-g66ql" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.106034 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2bwm/crc-debug-rjqmz"] Jan 27 16:34:09 crc kubenswrapper[4697]: E0127 16:34:09.107410 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7bc3fc-128f-49f2-9946-a9e41b846030" containerName="container-00" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.107499 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7bc3fc-128f-49f2-9946-a9e41b846030" containerName="container-00" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.107825 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7bc3fc-128f-49f2-9946-a9e41b846030" containerName="container-00" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.108561 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.221103 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7fv6\" (UniqueName: \"kubernetes.io/projected/adba9fe9-9059-4e2f-b016-4adace00bb40-kube-api-access-f7fv6\") pod \"crc-debug-rjqmz\" (UID: \"adba9fe9-9059-4e2f-b016-4adace00bb40\") " pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.221233 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/adba9fe9-9059-4e2f-b016-4adace00bb40-host\") pod \"crc-debug-rjqmz\" (UID: \"adba9fe9-9059-4e2f-b016-4adace00bb40\") " pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.323503 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7fv6\" (UniqueName: 
\"kubernetes.io/projected/adba9fe9-9059-4e2f-b016-4adace00bb40-kube-api-access-f7fv6\") pod \"crc-debug-rjqmz\" (UID: \"adba9fe9-9059-4e2f-b016-4adace00bb40\") " pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.323549 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/adba9fe9-9059-4e2f-b016-4adace00bb40-host\") pod \"crc-debug-rjqmz\" (UID: \"adba9fe9-9059-4e2f-b016-4adace00bb40\") " pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.323647 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/adba9fe9-9059-4e2f-b016-4adace00bb40-host\") pod \"crc-debug-rjqmz\" (UID: \"adba9fe9-9059-4e2f-b016-4adace00bb40\") " pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.343303 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7fv6\" (UniqueName: \"kubernetes.io/projected/adba9fe9-9059-4e2f-b016-4adace00bb40-kube-api-access-f7fv6\") pod \"crc-debug-rjqmz\" (UID: \"adba9fe9-9059-4e2f-b016-4adace00bb40\") " pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.424062 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.787208 4697 generic.go:334] "Generic (PLEG): container finished" podID="adba9fe9-9059-4e2f-b016-4adace00bb40" containerID="b05211027ce14bd913efc8c9c909289b749eab848702abc110a754cc5326c4ee" exitCode=0 Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.787392 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" event={"ID":"adba9fe9-9059-4e2f-b016-4adace00bb40","Type":"ContainerDied","Data":"b05211027ce14bd913efc8c9c909289b749eab848702abc110a754cc5326c4ee"} Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.788019 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" event={"ID":"adba9fe9-9059-4e2f-b016-4adace00bb40","Type":"ContainerStarted","Data":"206f6889912f11b4ab31824737982839a4f2597558fcb91fc94e99a719568147"} Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.819836 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2bwm/crc-debug-rjqmz"] Jan 27 16:34:09 crc kubenswrapper[4697]: I0127 16:34:09.829948 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2bwm/crc-debug-rjqmz"] Jan 27 16:34:10 crc kubenswrapper[4697]: I0127 16:34:10.898011 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:11 crc kubenswrapper[4697]: I0127 16:34:11.055768 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/adba9fe9-9059-4e2f-b016-4adace00bb40-host\") pod \"adba9fe9-9059-4e2f-b016-4adace00bb40\" (UID: \"adba9fe9-9059-4e2f-b016-4adace00bb40\") " Jan 27 16:34:11 crc kubenswrapper[4697]: I0127 16:34:11.055874 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7fv6\" (UniqueName: \"kubernetes.io/projected/adba9fe9-9059-4e2f-b016-4adace00bb40-kube-api-access-f7fv6\") pod \"adba9fe9-9059-4e2f-b016-4adace00bb40\" (UID: \"adba9fe9-9059-4e2f-b016-4adace00bb40\") " Jan 27 16:34:11 crc kubenswrapper[4697]: I0127 16:34:11.056913 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adba9fe9-9059-4e2f-b016-4adace00bb40-host" (OuterVolumeSpecName: "host") pod "adba9fe9-9059-4e2f-b016-4adace00bb40" (UID: "adba9fe9-9059-4e2f-b016-4adace00bb40"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:34:11 crc kubenswrapper[4697]: I0127 16:34:11.076289 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adba9fe9-9059-4e2f-b016-4adace00bb40-kube-api-access-f7fv6" (OuterVolumeSpecName: "kube-api-access-f7fv6") pod "adba9fe9-9059-4e2f-b016-4adace00bb40" (UID: "adba9fe9-9059-4e2f-b016-4adace00bb40"). InnerVolumeSpecName "kube-api-access-f7fv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:34:11 crc kubenswrapper[4697]: I0127 16:34:11.158668 4697 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/adba9fe9-9059-4e2f-b016-4adace00bb40-host\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:11 crc kubenswrapper[4697]: I0127 16:34:11.158702 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7fv6\" (UniqueName: \"kubernetes.io/projected/adba9fe9-9059-4e2f-b016-4adace00bb40-kube-api-access-f7fv6\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:11 crc kubenswrapper[4697]: I0127 16:34:11.807318 4697 scope.go:117] "RemoveContainer" containerID="b05211027ce14bd913efc8c9c909289b749eab848702abc110a754cc5326c4ee" Jan 27 16:34:11 crc kubenswrapper[4697]: I0127 16:34:11.807939 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/crc-debug-rjqmz" Jan 27 16:34:12 crc kubenswrapper[4697]: I0127 16:34:12.581198 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adba9fe9-9059-4e2f-b016-4adace00bb40" path="/var/lib/kubelet/pods/adba9fe9-9059-4e2f-b016-4adace00bb40/volumes" Jan 27 16:34:13 crc kubenswrapper[4697]: I0127 16:34:13.569646 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:34:13 crc kubenswrapper[4697]: E0127 16:34:13.569933 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:34:27 crc kubenswrapper[4697]: I0127 16:34:27.568586 4697 scope.go:117] "RemoveContainer" 
containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:34:27 crc kubenswrapper[4697]: E0127 16:34:27.569274 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:34:41 crc kubenswrapper[4697]: I0127 16:34:41.568465 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:34:41 crc kubenswrapper[4697]: E0127 16:34:41.569306 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:34:56 crc kubenswrapper[4697]: I0127 16:34:56.568402 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:34:56 crc kubenswrapper[4697]: E0127 16:34:56.569329 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.209247 4697 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgz7r"] Jan 27 16:34:59 crc kubenswrapper[4697]: E0127 16:34:59.213150 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adba9fe9-9059-4e2f-b016-4adace00bb40" containerName="container-00" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.213350 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="adba9fe9-9059-4e2f-b016-4adace00bb40" containerName="container-00" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.213806 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="adba9fe9-9059-4e2f-b016-4adace00bb40" containerName="container-00" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.215954 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.229711 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgz7r"] Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.261253 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-catalog-content\") pod \"redhat-marketplace-rgz7r\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.261925 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcszr\" (UniqueName: \"kubernetes.io/projected/cdaa8c43-c902-4835-8338-aef9a1f77e19-kube-api-access-jcszr\") pod \"redhat-marketplace-rgz7r\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.262116 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-utilities\") pod \"redhat-marketplace-rgz7r\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.364007 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-catalog-content\") pod \"redhat-marketplace-rgz7r\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.364093 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcszr\" (UniqueName: \"kubernetes.io/projected/cdaa8c43-c902-4835-8338-aef9a1f77e19-kube-api-access-jcszr\") pod \"redhat-marketplace-rgz7r\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.364157 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-utilities\") pod \"redhat-marketplace-rgz7r\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.364578 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-utilities\") pod \"redhat-marketplace-rgz7r\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.364994 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-catalog-content\") pod \"redhat-marketplace-rgz7r\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.393860 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcszr\" (UniqueName: \"kubernetes.io/projected/cdaa8c43-c902-4835-8338-aef9a1f77e19-kube-api-access-jcszr\") pod \"redhat-marketplace-rgz7r\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:34:59 crc kubenswrapper[4697]: I0127 16:34:59.536089 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:35:00 crc kubenswrapper[4697]: W0127 16:35:00.136981 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdaa8c43_c902_4835_8338_aef9a1f77e19.slice/crio-aae6006b42823a949aa93adb4da02e28de30148803c919caeb2bf5bc51f09dfd WatchSource:0}: Error finding container aae6006b42823a949aa93adb4da02e28de30148803c919caeb2bf5bc51f09dfd: Status 404 returned error can't find the container with id aae6006b42823a949aa93adb4da02e28de30148803c919caeb2bf5bc51f09dfd Jan 27 16:35:00 crc kubenswrapper[4697]: I0127 16:35:00.139624 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgz7r"] Jan 27 16:35:00 crc kubenswrapper[4697]: I0127 16:35:00.425526 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgz7r" event={"ID":"cdaa8c43-c902-4835-8338-aef9a1f77e19","Type":"ContainerStarted","Data":"8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd"} Jan 27 16:35:00 crc kubenswrapper[4697]: I0127 16:35:00.425892 4697 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-rgz7r" event={"ID":"cdaa8c43-c902-4835-8338-aef9a1f77e19","Type":"ContainerStarted","Data":"aae6006b42823a949aa93adb4da02e28de30148803c919caeb2bf5bc51f09dfd"} Jan 27 16:35:01 crc kubenswrapper[4697]: I0127 16:35:01.434064 4697 generic.go:334] "Generic (PLEG): container finished" podID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerID="8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd" exitCode=0 Jan 27 16:35:01 crc kubenswrapper[4697]: I0127 16:35:01.434226 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgz7r" event={"ID":"cdaa8c43-c902-4835-8338-aef9a1f77e19","Type":"ContainerDied","Data":"8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd"} Jan 27 16:35:01 crc kubenswrapper[4697]: I0127 16:35:01.436588 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:35:03 crc kubenswrapper[4697]: I0127 16:35:03.454593 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgz7r" event={"ID":"cdaa8c43-c902-4835-8338-aef9a1f77e19","Type":"ContainerStarted","Data":"58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc"} Jan 27 16:35:04 crc kubenswrapper[4697]: I0127 16:35:04.466578 4697 generic.go:334] "Generic (PLEG): container finished" podID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerID="58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc" exitCode=0 Jan 27 16:35:04 crc kubenswrapper[4697]: I0127 16:35:04.466657 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgz7r" event={"ID":"cdaa8c43-c902-4835-8338-aef9a1f77e19","Type":"ContainerDied","Data":"58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc"} Jan 27 16:35:05 crc kubenswrapper[4697]: I0127 16:35:05.476903 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-rgz7r" event={"ID":"cdaa8c43-c902-4835-8338-aef9a1f77e19","Type":"ContainerStarted","Data":"12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c"} Jan 27 16:35:05 crc kubenswrapper[4697]: I0127 16:35:05.494856 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgz7r" podStartSLOduration=3.018597818 podStartE2EDuration="6.49484109s" podCreationTimestamp="2026-01-27 16:34:59 +0000 UTC" firstStartedPulling="2026-01-27 16:35:01.436225525 +0000 UTC m=+5197.608625306" lastFinishedPulling="2026-01-27 16:35:04.912468797 +0000 UTC m=+5201.084868578" observedRunningTime="2026-01-27 16:35:05.494516853 +0000 UTC m=+5201.666916634" watchObservedRunningTime="2026-01-27 16:35:05.49484109 +0000 UTC m=+5201.667240871" Jan 27 16:35:05 crc kubenswrapper[4697]: I0127 16:35:05.786025 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cbf684fd-9bzgt_94a26d25-9d4f-4d9e-becb-5fef1852a9cc/barbican-api/0.log" Jan 27 16:35:06 crc kubenswrapper[4697]: I0127 16:35:06.051437 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cbf684fd-9bzgt_94a26d25-9d4f-4d9e-becb-5fef1852a9cc/barbican-api-log/0.log" Jan 27 16:35:06 crc kubenswrapper[4697]: I0127 16:35:06.197999 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b5997ff6-w2vq4_4ab6ee6b-e923-4905-8d8d-56f96e3bd471/barbican-keystone-listener/0.log" Jan 27 16:35:06 crc kubenswrapper[4697]: I0127 16:35:06.254551 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b5997ff6-w2vq4_4ab6ee6b-e923-4905-8d8d-56f96e3bd471/barbican-keystone-listener-log/0.log" Jan 27 16:35:06 crc kubenswrapper[4697]: I0127 16:35:06.519059 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-8c986997f-97nkx_c283033b-665a-4e84-b347-5ab724df37be/barbican-worker/0.log" Jan 27 16:35:06 crc kubenswrapper[4697]: I0127 16:35:06.523839 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8c986997f-97nkx_c283033b-665a-4e84-b347-5ab724df37be/barbican-worker-log/0.log" Jan 27 16:35:06 crc kubenswrapper[4697]: I0127 16:35:06.836980 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-rm5t2_e6db178e-d462-4895-84e2-10695b0df557/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:06 crc kubenswrapper[4697]: I0127 16:35:06.900019 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43946a66-4e74-47e4-bfd3-63256993e153/ceilometer-central-agent/0.log" Jan 27 16:35:06 crc kubenswrapper[4697]: I0127 16:35:06.926216 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43946a66-4e74-47e4-bfd3-63256993e153/ceilometer-notification-agent/0.log" Jan 27 16:35:07 crc kubenswrapper[4697]: I0127 16:35:07.124030 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43946a66-4e74-47e4-bfd3-63256993e153/proxy-httpd/0.log" Jan 27 16:35:07 crc kubenswrapper[4697]: I0127 16:35:07.126526 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43946a66-4e74-47e4-bfd3-63256993e153/sg-core/0.log" Jan 27 16:35:07 crc kubenswrapper[4697]: I0127 16:35:07.239667 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6c37afd4-a5ce-450f-8d51-231aba899e23/cinder-api/0.log" Jan 27 16:35:07 crc kubenswrapper[4697]: I0127 16:35:07.341410 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6c37afd4-a5ce-450f-8d51-231aba899e23/cinder-api-log/0.log" Jan 27 16:35:07 crc kubenswrapper[4697]: I0127 16:35:07.444046 4697 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-scheduler-0_b69b7a05-c4c4-48a4-a4fa-0cc140a18080/cinder-scheduler/0.log" Jan 27 16:35:07 crc kubenswrapper[4697]: I0127 16:35:07.532598 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b69b7a05-c4c4-48a4-a4fa-0cc140a18080/probe/0.log" Jan 27 16:35:07 crc kubenswrapper[4697]: I0127 16:35:07.758300 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-fpd2r_ed55f221-f5eb-421e-88b3-682ff73202dc/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:07 crc kubenswrapper[4697]: I0127 16:35:07.921965 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-666b2_e3f4f826-3a5f-4eb5-a34b-c1c0ff66d4e3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:08 crc kubenswrapper[4697]: I0127 16:35:08.032568 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-jdz29_56c582a3-145c-4300-8680-1720a7581f60/init/0.log" Jan 27 16:35:08 crc kubenswrapper[4697]: I0127 16:35:08.291005 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-67tjj_0b244a0a-7ccb-49be-bcef-497d3b0f99be/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:08 crc kubenswrapper[4697]: I0127 16:35:08.344908 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-jdz29_56c582a3-145c-4300-8680-1720a7581f60/init/0.log" Jan 27 16:35:08 crc kubenswrapper[4697]: I0127 16:35:08.482149 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-jdz29_56c582a3-145c-4300-8680-1720a7581f60/dnsmasq-dns/0.log" Jan 27 16:35:08 crc kubenswrapper[4697]: I0127 16:35:08.550334 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_8397e2ec-7b94-4690-b567-716eae78b6d0/glance-log/0.log" Jan 27 16:35:08 crc kubenswrapper[4697]: I0127 16:35:08.866087 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8397e2ec-7b94-4690-b567-716eae78b6d0/glance-httpd/0.log" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.011420 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_31e0b520-a0d8-4d9c-a53a-dbc75c401f4f/glance-httpd/0.log" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.025607 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_31e0b520-a0d8-4d9c-a53a-dbc75c401f4f/glance-log/0.log" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.272305 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9dc56b78-cpxnx_ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4/horizon/2.log" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.333669 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9dc56b78-cpxnx_ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4/horizon/1.log" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.536263 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.536883 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.558183 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-j646z_59c9a20e-f30b-44c1-86ff-fc751969cb24/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.587352 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.739761 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9dc56b78-cpxnx_ca5e937a-90cf-44e0-bf5c-bcb75c95a2f4/horizon-log/0.log" Jan 27 16:35:09 crc kubenswrapper[4697]: I0127 16:35:09.922450 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vfmvd_91be9d7e-7513-4b5f-a897-9bb94f9d7649/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:10 crc kubenswrapper[4697]: I0127 16:35:10.199884 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492161-pqk7t_02b729a6-604c-42d7-94d9-0d39bfcaf203/keystone-cron/0.log" Jan 27 16:35:10 crc kubenswrapper[4697]: I0127 16:35:10.279356 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_48bb37d7-5e93-4523-8526-b8b664997fb3/kube-state-metrics/0.log" Jan 27 16:35:10 crc kubenswrapper[4697]: I0127 16:35:10.500267 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-59df5b454d-5c7dx_d505d1b9-c72c-4515-8f3f-f543d0276487/keystone-api/0.log" Jan 27 16:35:10 crc kubenswrapper[4697]: I0127 16:35:10.581614 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4cl49_cb80b572-758d-4bd1-b54a-eb5b40cce9db/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:10 crc kubenswrapper[4697]: I0127 16:35:10.590338 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:35:10 crc kubenswrapper[4697]: I0127 16:35:10.639533 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgz7r"] Jan 27 16:35:11 crc kubenswrapper[4697]: I0127 16:35:11.137278 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-gtfnf_38a907af-3d24-434c-a097-3b3635db95d3/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:11 crc kubenswrapper[4697]: I0127 16:35:11.223919 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dff8b9f65-4b4q2_f67513b8-77d5-4a24-b1ee-ce73e70cb72d/neutron-httpd/0.log" Jan 27 16:35:11 crc kubenswrapper[4697]: I0127 16:35:11.358615 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dff8b9f65-4b4q2_f67513b8-77d5-4a24-b1ee-ce73e70cb72d/neutron-api/0.log" Jan 27 16:35:11 crc kubenswrapper[4697]: I0127 16:35:11.569063 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:35:11 crc kubenswrapper[4697]: E0127 16:35:11.569538 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:35:12 crc kubenswrapper[4697]: I0127 16:35:12.294940 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_42230c5d-4496-4618-bd71-9b11d49bde9b/nova-cell0-conductor-conductor/0.log" Jan 27 16:35:12 crc kubenswrapper[4697]: I0127 16:35:12.550170 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgz7r" podUID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerName="registry-server" containerID="cri-o://12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c" gracePeriod=2 Jan 27 16:35:12 crc kubenswrapper[4697]: I0127 16:35:12.793874 4697 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-cell1-conductor-0_41233cf8-f273-4cae-a02d-9e0fb56b2f1d/nova-cell1-conductor-conductor/0.log" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.106828 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6ac3c287-657e-4e2a-be91-50e9fbce6ea0/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.108046 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.219586 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f7a25f76-cbe2-44a4-911d-40b875d2f934/nova-api-log/0.log" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.296546 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcszr\" (UniqueName: \"kubernetes.io/projected/cdaa8c43-c902-4835-8338-aef9a1f77e19-kube-api-access-jcszr\") pod \"cdaa8c43-c902-4835-8338-aef9a1f77e19\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.296660 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-catalog-content\") pod \"cdaa8c43-c902-4835-8338-aef9a1f77e19\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.296745 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-utilities\") pod \"cdaa8c43-c902-4835-8338-aef9a1f77e19\" (UID: \"cdaa8c43-c902-4835-8338-aef9a1f77e19\") " Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.298630 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-utilities" (OuterVolumeSpecName: "utilities") pod "cdaa8c43-c902-4835-8338-aef9a1f77e19" (UID: "cdaa8c43-c902-4835-8338-aef9a1f77e19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.303399 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdaa8c43-c902-4835-8338-aef9a1f77e19-kube-api-access-jcszr" (OuterVolumeSpecName: "kube-api-access-jcszr") pod "cdaa8c43-c902-4835-8338-aef9a1f77e19" (UID: "cdaa8c43-c902-4835-8338-aef9a1f77e19"). InnerVolumeSpecName "kube-api-access-jcszr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.332503 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdaa8c43-c902-4835-8338-aef9a1f77e19" (UID: "cdaa8c43-c902-4835-8338-aef9a1f77e19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.399037 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcszr\" (UniqueName: \"kubernetes.io/projected/cdaa8c43-c902-4835-8338-aef9a1f77e19-kube-api-access-jcszr\") on node \"crc\" DevicePath \"\"" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.399097 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.399114 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdaa8c43-c902-4835-8338-aef9a1f77e19-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.491666 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jk5zz_8b858060-b802-452d-aa2a-1be4f38efe74/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.563349 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3c1b720b-0b31-4c5d-9306-ca65e780dc12/nova-metadata-log/0.log" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.565363 4697 generic.go:334] "Generic (PLEG): container finished" podID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerID="12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c" exitCode=0 Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.565400 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgz7r" event={"ID":"cdaa8c43-c902-4835-8338-aef9a1f77e19","Type":"ContainerDied","Data":"12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c"} Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.565424 4697 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgz7r" event={"ID":"cdaa8c43-c902-4835-8338-aef9a1f77e19","Type":"ContainerDied","Data":"aae6006b42823a949aa93adb4da02e28de30148803c919caeb2bf5bc51f09dfd"} Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.565441 4697 scope.go:117] "RemoveContainer" containerID="12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.565565 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgz7r" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.586186 4697 scope.go:117] "RemoveContainer" containerID="58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.608888 4697 scope.go:117] "RemoveContainer" containerID="8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.622562 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgz7r"] Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.632110 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgz7r"] Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.665538 4697 scope.go:117] "RemoveContainer" containerID="12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c" Jan 27 16:35:13 crc kubenswrapper[4697]: E0127 16:35:13.669911 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c\": container with ID starting with 12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c not found: ID does not exist" containerID="12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c" Jan 27 16:35:13 crc 
kubenswrapper[4697]: I0127 16:35:13.669953 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c"} err="failed to get container status \"12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c\": rpc error: code = NotFound desc = could not find container \"12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c\": container with ID starting with 12440b341db5bca7c083cbea38cd918a3dc607a0d43b4589c08960bd5633ef1c not found: ID does not exist" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.669981 4697 scope.go:117] "RemoveContainer" containerID="58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc" Jan 27 16:35:13 crc kubenswrapper[4697]: E0127 16:35:13.670614 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc\": container with ID starting with 58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc not found: ID does not exist" containerID="58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.670667 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc"} err="failed to get container status \"58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc\": rpc error: code = NotFound desc = could not find container \"58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc\": container with ID starting with 58a19a2db02abea5e0c11cef1174b0f46b27bfff404641c1fbb0fb7c104b28fc not found: ID does not exist" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.670701 4697 scope.go:117] "RemoveContainer" containerID="8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd" Jan 27 
16:35:13 crc kubenswrapper[4697]: E0127 16:35:13.671910 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd\": container with ID starting with 8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd not found: ID does not exist" containerID="8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.672171 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd"} err="failed to get container status \"8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd\": rpc error: code = NotFound desc = could not find container \"8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd\": container with ID starting with 8064d65031300de34feb52c1e86db8f87b6499bbf729343ca15b2621cc8a7cbd not found: ID does not exist" Jan 27 16:35:13 crc kubenswrapper[4697]: I0127 16:35:13.804104 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f7a25f76-cbe2-44a4-911d-40b875d2f934/nova-api-api/0.log" Jan 27 16:35:14 crc kubenswrapper[4697]: I0127 16:35:14.155834 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a07684ab-be65-430a-89ff-7e3503304f07/mysql-bootstrap/0.log" Jan 27 16:35:14 crc kubenswrapper[4697]: I0127 16:35:14.374247 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a07684ab-be65-430a-89ff-7e3503304f07/mysql-bootstrap/0.log" Jan 27 16:35:14 crc kubenswrapper[4697]: I0127 16:35:14.385681 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0aac0bcf-d6ae-4188-b597-e42935d81d0e/nova-scheduler-scheduler/0.log" Jan 27 16:35:14 crc kubenswrapper[4697]: I0127 16:35:14.475414 4697 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a07684ab-be65-430a-89ff-7e3503304f07/galera/0.log" Jan 27 16:35:14 crc kubenswrapper[4697]: I0127 16:35:14.582011 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdaa8c43-c902-4835-8338-aef9a1f77e19" path="/var/lib/kubelet/pods/cdaa8c43-c902-4835-8338-aef9a1f77e19/volumes" Jan 27 16:35:14 crc kubenswrapper[4697]: I0127 16:35:14.640060 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b1f75076-2324-44ff-9a33-e083e3de3c02/mysql-bootstrap/0.log" Jan 27 16:35:14 crc kubenswrapper[4697]: I0127 16:35:14.836515 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b1f75076-2324-44ff-9a33-e083e3de3c02/mysql-bootstrap/0.log" Jan 27 16:35:14 crc kubenswrapper[4697]: I0127 16:35:14.896936 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b1f75076-2324-44ff-9a33-e083e3de3c02/galera/0.log" Jan 27 16:35:15 crc kubenswrapper[4697]: I0127 16:35:15.046705 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3d176f9c-9152-4162-b723-1f6e8330118a/openstackclient/0.log" Jan 27 16:35:15 crc kubenswrapper[4697]: I0127 16:35:15.201294 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6sgqx_72f31a1f-c388-4fed-9842-13f65cf91e9b/ovn-controller/0.log" Jan 27 16:35:15 crc kubenswrapper[4697]: I0127 16:35:15.506000 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mfkdn_e7853fb5-d995-44ae-b1b5-c4c38fcadbd2/openstack-network-exporter/0.log" Jan 27 16:35:15 crc kubenswrapper[4697]: I0127 16:35:15.625924 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3c1b720b-0b31-4c5d-9306-ca65e780dc12/nova-metadata-metadata/0.log" Jan 27 16:35:15 crc kubenswrapper[4697]: I0127 16:35:15.647298 4697 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p278q_9cb679cc-394c-4c45-8712-058fad1090e7/ovsdb-server-init/0.log" Jan 27 16:35:15 crc kubenswrapper[4697]: I0127 16:35:15.999058 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_39bb5256-76f5-4ada-8803-c88ee4ccd881/memcached/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.062252 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p278q_9cb679cc-394c-4c45-8712-058fad1090e7/ovs-vswitchd/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.094264 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p278q_9cb679cc-394c-4c45-8712-058fad1090e7/ovsdb-server/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.138006 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p278q_9cb679cc-394c-4c45-8712-058fad1090e7/ovsdb-server-init/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.274864 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ljqck_679f5e04-5c46-49e5-9216-f850ca38d84d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.356646 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_93a06b1b-be54-4517-a12a-83c9a4f91367/openstack-network-exporter/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.356813 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_93a06b1b-be54-4517-a12a-83c9a4f91367/ovn-northd/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.500204 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0e5111c0-346b-4994-822c-c86f4ee166bc/ovsdbserver-nb/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.503912 4697 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0e5111c0-346b-4994-822c-c86f4ee166bc/openstack-network-exporter/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.598524 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0c5d2358-058b-4d32-86b7-20228aff9677/openstack-network-exporter/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.696673 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0c5d2358-058b-4d32-86b7-20228aff9677/ovsdbserver-sb/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.862380 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85b9bd5db8-9x55q_5663a40f-33b6-4e0b-9f94-94aecd69e3af/placement-api/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.987810 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e1aa709a-61ff-458d-a4b9-ca6d06bc537c/setup-container/0.log" Jan 27 16:35:16 crc kubenswrapper[4697]: I0127 16:35:16.989127 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85b9bd5db8-9x55q_5663a40f-33b6-4e0b-9f94-94aecd69e3af/placement-log/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.156047 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e1aa709a-61ff-458d-a4b9-ca6d06bc537c/setup-container/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.173724 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e1aa709a-61ff-458d-a4b9-ca6d06bc537c/rabbitmq/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.192140 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b9b87d14-1e98-448a-9b9c-3c47e4782ede/setup-container/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.368131 4697 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-server-0_b9b87d14-1e98-448a-9b9c-3c47e4782ede/setup-container/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.382944 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2cdg9_59725918-a0f4-46fb-afcf-393ee1d4d22b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.425047 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b9b87d14-1e98-448a-9b9c-3c47e4782ede/rabbitmq/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.648705 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-bhvqs_e7fe5183-36d1-4594-859b-b999146707ad/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.685166 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-45gtt_6eb281af-668c-4872-8100-3a9db4eb4c5a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.739925 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4tcrw_f15a6662-a671-40da-9473-59daaedbe07c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:17 crc kubenswrapper[4697]: I0127 16:35:17.995394 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-nkk45_707d2908-c632-4cb5-9a3f-8d44f79aedcb/ssh-known-hosts-edpm-deployment/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.034921 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8bdd65479-mrv2d_8f6bc9e4-3f3f-4e33-a648-4381818937f1/proxy-server/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.108393 4697 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-proxy-8bdd65479-mrv2d_8f6bc9e4-3f3f-4e33-a648-4381818937f1/proxy-httpd/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.229644 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-w2t78_afa66008-cd63-46fa-8ac6-622e2b465eec/swift-ring-rebalance/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.282342 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/account-auditor/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.353021 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/account-reaper/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.455624 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/account-server/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.457080 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/account-replicator/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.518357 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/container-auditor/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.709355 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-expirer/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.710578 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/container-server/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.737938 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/container-replicator/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.770364 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/container-updater/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.827653 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-auditor/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.933417 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-updater/0.log" Jan 27 16:35:18 crc kubenswrapper[4697]: I0127 16:35:18.960668 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-replicator/0.log" Jan 27 16:35:19 crc kubenswrapper[4697]: I0127 16:35:19.008760 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/rsync/0.log" Jan 27 16:35:19 crc kubenswrapper[4697]: I0127 16:35:19.158818 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/object-server/0.log" Jan 27 16:35:19 crc kubenswrapper[4697]: I0127 16:35:19.286309 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4c66cac-c142-4579-9d13-053d43983229/swift-recon-cron/0.log" Jan 27 16:35:19 crc kubenswrapper[4697]: I0127 16:35:19.425624 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4hmkx_2b30d2e1-f8ce-4e50-9476-eb2d454bc1ce/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:19 crc kubenswrapper[4697]: I0127 16:35:19.503752 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_76805ce8-13c7-4d04-83c6-b70eaf33b9d8/tempest-tests-tempest-tests-runner/0.log" Jan 27 16:35:19 crc kubenswrapper[4697]: I0127 16:35:19.641452 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_adb48667-7dff-4826-858e-5825e64dfd59/test-operator-logs-container/0.log" Jan 27 16:35:19 crc kubenswrapper[4697]: I0127 16:35:19.710653 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-h6skx_dba3e49a-c1cb-4006-b821-a341645c7fba/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:35:24 crc kubenswrapper[4697]: I0127 16:35:24.597523 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:35:24 crc kubenswrapper[4697]: E0127 16:35:24.604418 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:35:37 crc kubenswrapper[4697]: I0127 16:35:37.568057 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:35:37 crc kubenswrapper[4697]: E0127 16:35:37.568974 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" 
podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:35:41 crc kubenswrapper[4697]: I0127 16:35:41.896277 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pxv7h"] Jan 27 16:35:41 crc kubenswrapper[4697]: E0127 16:35:41.897125 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerName="extract-content" Jan 27 16:35:41 crc kubenswrapper[4697]: I0127 16:35:41.897136 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerName="extract-content" Jan 27 16:35:41 crc kubenswrapper[4697]: E0127 16:35:41.897178 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerName="extract-utilities" Jan 27 16:35:41 crc kubenswrapper[4697]: I0127 16:35:41.897184 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerName="extract-utilities" Jan 27 16:35:41 crc kubenswrapper[4697]: E0127 16:35:41.897208 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerName="registry-server" Jan 27 16:35:41 crc kubenswrapper[4697]: I0127 16:35:41.897215 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerName="registry-server" Jan 27 16:35:41 crc kubenswrapper[4697]: I0127 16:35:41.897384 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdaa8c43-c902-4835-8338-aef9a1f77e19" containerName="registry-server" Jan 27 16:35:41 crc kubenswrapper[4697]: I0127 16:35:41.898569 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:41 crc kubenswrapper[4697]: I0127 16:35:41.917933 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxv7h"] Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.032358 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-utilities\") pod \"community-operators-pxv7h\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.032710 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-catalog-content\") pod \"community-operators-pxv7h\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.032776 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2ghg\" (UniqueName: \"kubernetes.io/projected/e45570b1-6a10-4572-9ccd-abde6b453160-kube-api-access-t2ghg\") pod \"community-operators-pxv7h\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.134443 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-catalog-content\") pod \"community-operators-pxv7h\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.134535 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t2ghg\" (UniqueName: \"kubernetes.io/projected/e45570b1-6a10-4572-9ccd-abde6b453160-kube-api-access-t2ghg\") pod \"community-operators-pxv7h\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.134580 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-utilities\") pod \"community-operators-pxv7h\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.135026 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-utilities\") pod \"community-operators-pxv7h\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.135237 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-catalog-content\") pod \"community-operators-pxv7h\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.155603 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2ghg\" (UniqueName: \"kubernetes.io/projected/e45570b1-6a10-4572-9ccd-abde6b453160-kube-api-access-t2ghg\") pod \"community-operators-pxv7h\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.232174 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.800698 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxv7h"] Jan 27 16:35:42 crc kubenswrapper[4697]: W0127 16:35:42.803160 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode45570b1_6a10_4572_9ccd_abde6b453160.slice/crio-4dfa04b36651447ea1a7ee939f053860bf0651444bd7462fa9e3139d4b23e04e WatchSource:0}: Error finding container 4dfa04b36651447ea1a7ee939f053860bf0651444bd7462fa9e3139d4b23e04e: Status 404 returned error can't find the container with id 4dfa04b36651447ea1a7ee939f053860bf0651444bd7462fa9e3139d4b23e04e Jan 27 16:35:42 crc kubenswrapper[4697]: I0127 16:35:42.929583 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxv7h" event={"ID":"e45570b1-6a10-4572-9ccd-abde6b453160","Type":"ContainerStarted","Data":"4dfa04b36651447ea1a7ee939f053860bf0651444bd7462fa9e3139d4b23e04e"} Jan 27 16:35:43 crc kubenswrapper[4697]: I0127 16:35:43.937907 4697 generic.go:334] "Generic (PLEG): container finished" podID="e45570b1-6a10-4572-9ccd-abde6b453160" containerID="ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99" exitCode=0 Jan 27 16:35:43 crc kubenswrapper[4697]: I0127 16:35:43.937967 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxv7h" event={"ID":"e45570b1-6a10-4572-9ccd-abde6b453160","Type":"ContainerDied","Data":"ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99"} Jan 27 16:35:44 crc kubenswrapper[4697]: I0127 16:35:44.951161 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxv7h" 
event={"ID":"e45570b1-6a10-4572-9ccd-abde6b453160","Type":"ContainerStarted","Data":"92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4"} Jan 27 16:35:46 crc kubenswrapper[4697]: I0127 16:35:46.981313 4697 generic.go:334] "Generic (PLEG): container finished" podID="e45570b1-6a10-4572-9ccd-abde6b453160" containerID="92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4" exitCode=0 Jan 27 16:35:46 crc kubenswrapper[4697]: I0127 16:35:46.981584 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxv7h" event={"ID":"e45570b1-6a10-4572-9ccd-abde6b453160","Type":"ContainerDied","Data":"92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4"} Jan 27 16:35:47 crc kubenswrapper[4697]: I0127 16:35:47.148263 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/util/0.log" Jan 27 16:35:47 crc kubenswrapper[4697]: I0127 16:35:47.393652 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/pull/0.log" Jan 27 16:35:47 crc kubenswrapper[4697]: I0127 16:35:47.438503 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/pull/0.log" Jan 27 16:35:47 crc kubenswrapper[4697]: I0127 16:35:47.450527 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/util/0.log" Jan 27 16:35:47 crc kubenswrapper[4697]: I0127 16:35:47.660745 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/pull/0.log" Jan 27 16:35:47 crc kubenswrapper[4697]: I0127 16:35:47.670241 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/util/0.log" Jan 27 16:35:47 crc kubenswrapper[4697]: I0127 16:35:47.695132 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ebbdeb42ee59bc46cd5a9affeefe7a428e186e004b54bc44478e0857bm8tsj_e2327960-3adb-4edf-97cb-ffb7cbe0db07/extract/0.log" Jan 27 16:35:47 crc kubenswrapper[4697]: I0127 16:35:47.990047 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxv7h" event={"ID":"e45570b1-6a10-4572-9ccd-abde6b453160","Type":"ContainerStarted","Data":"ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453"} Jan 27 16:35:48 crc kubenswrapper[4697]: I0127 16:35:48.014370 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pxv7h" podStartSLOduration=3.499151189 podStartE2EDuration="7.014353051s" podCreationTimestamp="2026-01-27 16:35:41 +0000 UTC" firstStartedPulling="2026-01-27 16:35:43.940057674 +0000 UTC m=+5240.112457455" lastFinishedPulling="2026-01-27 16:35:47.455259536 +0000 UTC m=+5243.627659317" observedRunningTime="2026-01-27 16:35:48.007460233 +0000 UTC m=+5244.179860014" watchObservedRunningTime="2026-01-27 16:35:48.014353051 +0000 UTC m=+5244.186752832" Jan 27 16:35:48 crc kubenswrapper[4697]: I0127 16:35:48.023645 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-666rh_349690fb-f1d2-4848-8424-01e794dc6317/manager/0.log" Jan 27 16:35:48 crc kubenswrapper[4697]: I0127 16:35:48.111550 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-wppqr_6be24454-9d04-4e38-a00e-d6f62e156bd0/manager/0.log" Jan 27 16:35:48 crc kubenswrapper[4697]: I0127 16:35:48.254611 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-s4rdx_d930a939-ecb8-4955-88bf-274d35ed9e6a/manager/0.log" Jan 27 16:35:48 crc kubenswrapper[4697]: I0127 16:35:48.511155 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-hv8n2_71562cb6-5243-4433-bd90-07c45cf11203/manager/0.log" Jan 27 16:35:48 crc kubenswrapper[4697]: I0127 16:35:48.546194 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-5h569_ab1c79ce-8e28-4565-9760-5fd20ddf47eb/manager/0.log" Jan 27 16:35:48 crc kubenswrapper[4697]: I0127 16:35:48.752255 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-9nsp6_88db0cc4-3d70-47be-83e1-e5d2d3f3ff24/manager/0.log" Jan 27 16:35:49 crc kubenswrapper[4697]: I0127 16:35:49.146679 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-9qhk4_ea0ee1bf-fe8d-4c6d-bf66-bb6b4b632ccf/manager/0.log" Jan 27 16:35:49 crc kubenswrapper[4697]: I0127 16:35:49.260369 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-2zk5c_d26a6673-d71e-4f0a-a8f6-e87866dafa6a/manager/0.log" Jan 27 16:35:49 crc kubenswrapper[4697]: I0127 16:35:49.411925 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-zppcc_39770161-132e-4037-aec7-9db6d10d17d8/manager/0.log" Jan 27 16:35:49 crc kubenswrapper[4697]: I0127 16:35:49.599594 4697 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-5frlr_a068f004-7f2c-4c3d-8bfe-98fbc4b65a73/manager/0.log" Jan 27 16:35:49 crc kubenswrapper[4697]: I0127 16:35:49.676077 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-s2mqs_b23f7e1b-6141-4dc3-bf18-70732ae7889a/manager/0.log" Jan 27 16:35:49 crc kubenswrapper[4697]: I0127 16:35:49.820202 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-7w8b9_42edadff-8683-4551-b634-33e4ad590fb1/manager/0.log" Jan 27 16:35:50 crc kubenswrapper[4697]: I0127 16:35:50.025421 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-nx7cr_c3d1f921-6d2e-4c30-9f75-14f206a1fb7e/manager/0.log" Jan 27 16:35:50 crc kubenswrapper[4697]: I0127 16:35:50.076706 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-kvp8m_66cf11a2-77ca-44a8-ade8-610d02430a2d/manager/0.log" Jan 27 16:35:50 crc kubenswrapper[4697]: I0127 16:35:50.381299 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854dgsgx_ee7cb913-d3ef-459b-bd70-d6a2aea9ace3/manager/0.log" Jan 27 16:35:50 crc kubenswrapper[4697]: I0127 16:35:50.425082 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6fb647f7d4-299rw_de629115-105c-4dac-b1d9-ce37c3cf02b2/operator/0.log" Jan 27 16:35:50 crc kubenswrapper[4697]: I0127 16:35:50.663222 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mvqgc_779425e2-ee9e-45ea-b8c9-07df5c5278b2/registry-server/0.log" Jan 27 16:35:51 crc kubenswrapper[4697]: I0127 16:35:51.013977 4697 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-44hkp_a484e650-0a10-44e5-8b88-0f4157293d48/manager/0.log" Jan 27 16:35:51 crc kubenswrapper[4697]: I0127 16:35:51.085612 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-ql7xq_cb062e69-364e-4798-9a7e-4cfb1b1ca571/manager/0.log" Jan 27 16:35:51 crc kubenswrapper[4697]: I0127 16:35:51.303775 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4wpgd_c74a171d-554d-4e80-ae59-cc340cad54be/operator/0.log" Jan 27 16:35:51 crc kubenswrapper[4697]: I0127 16:35:51.518111 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-6hdkv_eae7ff28-7cf8-4e7e-bb04-3e75bb4156ec/manager/0.log" Jan 27 16:35:51 crc kubenswrapper[4697]: I0127 16:35:51.565839 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-ff554fc88-js46k_a0f8d486-5d8d-4ae1-9d4c-02f4ab128ede/manager/0.log" Jan 27 16:35:51 crc kubenswrapper[4697]: I0127 16:35:51.718527 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-bzmfz_386961d6-c4f3-48c7-a03f-768c470daee4/manager/0.log" Jan 27 16:35:51 crc kubenswrapper[4697]: I0127 16:35:51.797323 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-bkw8p_081ab885-5c5c-41c5-a1ca-69ab3e0b5b45/manager/0.log" Jan 27 16:35:51 crc kubenswrapper[4697]: I0127 16:35:51.946943 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c9bb4b66c-xktdh_89a02bfb-edab-48f6-8c52-6d5f56541057/manager/0.log" Jan 27 16:35:52 crc kubenswrapper[4697]: I0127 16:35:52.232469 4697 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:52 crc kubenswrapper[4697]: I0127 16:35:52.233611 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:52 crc kubenswrapper[4697]: I0127 16:35:52.284475 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:52 crc kubenswrapper[4697]: I0127 16:35:52.572723 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:35:52 crc kubenswrapper[4697]: E0127 16:35:52.572956 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:35:53 crc kubenswrapper[4697]: I0127 16:35:53.070904 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:53 crc kubenswrapper[4697]: I0127 16:35:53.118235 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxv7h"] Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.055404 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pxv7h" podUID="e45570b1-6a10-4572-9ccd-abde6b453160" containerName="registry-server" containerID="cri-o://ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453" gracePeriod=2 Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.487870 4697 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.602036 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-utilities\") pod \"e45570b1-6a10-4572-9ccd-abde6b453160\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.602103 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-catalog-content\") pod \"e45570b1-6a10-4572-9ccd-abde6b453160\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.602158 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2ghg\" (UniqueName: \"kubernetes.io/projected/e45570b1-6a10-4572-9ccd-abde6b453160-kube-api-access-t2ghg\") pod \"e45570b1-6a10-4572-9ccd-abde6b453160\" (UID: \"e45570b1-6a10-4572-9ccd-abde6b453160\") " Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.602759 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-utilities" (OuterVolumeSpecName: "utilities") pod "e45570b1-6a10-4572-9ccd-abde6b453160" (UID: "e45570b1-6a10-4572-9ccd-abde6b453160"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.607302 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45570b1-6a10-4572-9ccd-abde6b453160-kube-api-access-t2ghg" (OuterVolumeSpecName: "kube-api-access-t2ghg") pod "e45570b1-6a10-4572-9ccd-abde6b453160" (UID: "e45570b1-6a10-4572-9ccd-abde6b453160"). 
InnerVolumeSpecName "kube-api-access-t2ghg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.658114 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e45570b1-6a10-4572-9ccd-abde6b453160" (UID: "e45570b1-6a10-4572-9ccd-abde6b453160"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.704672 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.704711 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45570b1-6a10-4572-9ccd-abde6b453160-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:35:55 crc kubenswrapper[4697]: I0127 16:35:55.704726 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2ghg\" (UniqueName: \"kubernetes.io/projected/e45570b1-6a10-4572-9ccd-abde6b453160-kube-api-access-t2ghg\") on node \"crc\" DevicePath \"\"" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.072718 4697 generic.go:334] "Generic (PLEG): container finished" podID="e45570b1-6a10-4572-9ccd-abde6b453160" containerID="ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453" exitCode=0 Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.072767 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxv7h" event={"ID":"e45570b1-6a10-4572-9ccd-abde6b453160","Type":"ContainerDied","Data":"ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453"} Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.072810 
4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxv7h" event={"ID":"e45570b1-6a10-4572-9ccd-abde6b453160","Type":"ContainerDied","Data":"4dfa04b36651447ea1a7ee939f053860bf0651444bd7462fa9e3139d4b23e04e"} Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.072838 4697 scope.go:117] "RemoveContainer" containerID="ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.073016 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxv7h" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.108732 4697 scope.go:117] "RemoveContainer" containerID="92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.111458 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxv7h"] Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.119650 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pxv7h"] Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.142713 4697 scope.go:117] "RemoveContainer" containerID="ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.177447 4697 scope.go:117] "RemoveContainer" containerID="ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453" Jan 27 16:35:56 crc kubenswrapper[4697]: E0127 16:35:56.177914 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453\": container with ID starting with ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453 not found: ID does not exist" containerID="ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453" Jan 27 16:35:56 
crc kubenswrapper[4697]: I0127 16:35:56.177966 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453"} err="failed to get container status \"ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453\": rpc error: code = NotFound desc = could not find container \"ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453\": container with ID starting with ef931ebceb89e43f3bdece0f619ec974db36317d9b0e632a370867a524a1b453 not found: ID does not exist" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.178002 4697 scope.go:117] "RemoveContainer" containerID="92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4" Jan 27 16:35:56 crc kubenswrapper[4697]: E0127 16:35:56.178410 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4\": container with ID starting with 92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4 not found: ID does not exist" containerID="92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.178442 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4"} err="failed to get container status \"92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4\": rpc error: code = NotFound desc = could not find container \"92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4\": container with ID starting with 92ea2dcd276b696fc1bb6aa62d5565574002ff0e3fec07722aeef326091c95d4 not found: ID does not exist" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.178462 4697 scope.go:117] "RemoveContainer" containerID="ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99" Jan 27 
16:35:56 crc kubenswrapper[4697]: E0127 16:35:56.178906 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99\": container with ID starting with ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99 not found: ID does not exist" containerID="ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.179016 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99"} err="failed to get container status \"ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99\": rpc error: code = NotFound desc = could not find container \"ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99\": container with ID starting with ba6a64220f54d7403062d2fad02df4c1dfdd020afc557ce9c08497bff9e6bd99 not found: ID does not exist" Jan 27 16:35:56 crc kubenswrapper[4697]: I0127 16:35:56.581404 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45570b1-6a10-4572-9ccd-abde6b453160" path="/var/lib/kubelet/pods/e45570b1-6a10-4572-9ccd-abde6b453160/volumes" Jan 27 16:36:06 crc kubenswrapper[4697]: I0127 16:36:06.568400 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:36:06 crc kubenswrapper[4697]: E0127 16:36:06.569342 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:36:12 crc 
kubenswrapper[4697]: I0127 16:36:12.363482 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4tnq9_49dd977b-6315-4446-8804-242e7e94a375/control-plane-machine-set-operator/0.log" Jan 27 16:36:12 crc kubenswrapper[4697]: I0127 16:36:12.443331 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p5q5_cbd9208d-08ed-47af-a7cf-b9ee3973b964/kube-rbac-proxy/0.log" Jan 27 16:36:12 crc kubenswrapper[4697]: I0127 16:36:12.514573 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p5q5_cbd9208d-08ed-47af-a7cf-b9ee3973b964/machine-api-operator/0.log" Jan 27 16:36:19 crc kubenswrapper[4697]: I0127 16:36:19.569598 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:36:19 crc kubenswrapper[4697]: E0127 16:36:19.570612 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wz495_openshift-machine-config-operator(e9bec8bc-b2a6-4865-83ca-692ae5c022a6)\"" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.751818 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6vrv8"] Jan 27 16:36:20 crc kubenswrapper[4697]: E0127 16:36:20.752389 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45570b1-6a10-4572-9ccd-abde6b453160" containerName="extract-content" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.752408 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45570b1-6a10-4572-9ccd-abde6b453160" containerName="extract-content" 
Jan 27 16:36:20 crc kubenswrapper[4697]: E0127 16:36:20.752428 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45570b1-6a10-4572-9ccd-abde6b453160" containerName="extract-utilities" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.752436 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45570b1-6a10-4572-9ccd-abde6b453160" containerName="extract-utilities" Jan 27 16:36:20 crc kubenswrapper[4697]: E0127 16:36:20.752459 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45570b1-6a10-4572-9ccd-abde6b453160" containerName="registry-server" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.752468 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45570b1-6a10-4572-9ccd-abde6b453160" containerName="registry-server" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.752680 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45570b1-6a10-4572-9ccd-abde6b453160" containerName="registry-server" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.754709 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.775156 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vrv8"] Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.826509 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-catalog-content\") pod \"certified-operators-6vrv8\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.826606 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-utilities\") pod \"certified-operators-6vrv8\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.826658 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpslk\" (UniqueName: \"kubernetes.io/projected/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-kube-api-access-mpslk\") pod \"certified-operators-6vrv8\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.928468 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-catalog-content\") pod \"certified-operators-6vrv8\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.928807 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-utilities\") pod \"certified-operators-6vrv8\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.928983 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpslk\" (UniqueName: \"kubernetes.io/projected/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-kube-api-access-mpslk\") pod \"certified-operators-6vrv8\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.929065 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-catalog-content\") pod \"certified-operators-6vrv8\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.929315 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-utilities\") pod \"certified-operators-6vrv8\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:20 crc kubenswrapper[4697]: I0127 16:36:20.960193 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpslk\" (UniqueName: \"kubernetes.io/projected/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-kube-api-access-mpslk\") pod \"certified-operators-6vrv8\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:21 crc kubenswrapper[4697]: I0127 16:36:21.115731 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:21 crc kubenswrapper[4697]: I0127 16:36:21.614894 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vrv8"] Jan 27 16:36:21 crc kubenswrapper[4697]: W0127 16:36:21.627236 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15cecddc_3aaf_4f2e_9f4a_a3de77ea2854.slice/crio-7d359aaa4af9a78e2b0ec52e11c742becdb943826dd0f89e4580a29c1d59de41 WatchSource:0}: Error finding container 7d359aaa4af9a78e2b0ec52e11c742becdb943826dd0f89e4580a29c1d59de41: Status 404 returned error can't find the container with id 7d359aaa4af9a78e2b0ec52e11c742becdb943826dd0f89e4580a29c1d59de41 Jan 27 16:36:22 crc kubenswrapper[4697]: I0127 16:36:22.402381 4697 generic.go:334] "Generic (PLEG): container finished" podID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerID="58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9" exitCode=0 Jan 27 16:36:22 crc kubenswrapper[4697]: I0127 16:36:22.402468 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vrv8" event={"ID":"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854","Type":"ContainerDied","Data":"58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9"} Jan 27 16:36:22 crc kubenswrapper[4697]: I0127 16:36:22.403745 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vrv8" event={"ID":"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854","Type":"ContainerStarted","Data":"7d359aaa4af9a78e2b0ec52e11c742becdb943826dd0f89e4580a29c1d59de41"} Jan 27 16:36:23 crc kubenswrapper[4697]: I0127 16:36:23.413357 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vrv8" 
event={"ID":"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854","Type":"ContainerStarted","Data":"944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897"} Jan 27 16:36:25 crc kubenswrapper[4697]: I0127 16:36:25.437267 4697 generic.go:334] "Generic (PLEG): container finished" podID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerID="944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897" exitCode=0 Jan 27 16:36:25 crc kubenswrapper[4697]: I0127 16:36:25.437378 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vrv8" event={"ID":"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854","Type":"ContainerDied","Data":"944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897"} Jan 27 16:36:26 crc kubenswrapper[4697]: I0127 16:36:26.450065 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vrv8" event={"ID":"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854","Type":"ContainerStarted","Data":"0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e"} Jan 27 16:36:26 crc kubenswrapper[4697]: I0127 16:36:26.477864 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6vrv8" podStartSLOduration=3.029993281 podStartE2EDuration="6.477844041s" podCreationTimestamp="2026-01-27 16:36:20 +0000 UTC" firstStartedPulling="2026-01-27 16:36:22.405704796 +0000 UTC m=+5278.578104587" lastFinishedPulling="2026-01-27 16:36:25.853555566 +0000 UTC m=+5282.025955347" observedRunningTime="2026-01-27 16:36:26.473444894 +0000 UTC m=+5282.645844675" watchObservedRunningTime="2026-01-27 16:36:26.477844041 +0000 UTC m=+5282.650243822" Jan 27 16:36:27 crc kubenswrapper[4697]: I0127 16:36:27.210190 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-z8rp7_276bf9e3-2608-4096-bc3a-fff69d9dfc64/cert-manager-controller/0.log" Jan 27 16:36:27 crc kubenswrapper[4697]: I0127 16:36:27.406188 4697 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-2vqhk_4cf2332b-1a6a-460c-a3a8-d7110b0960a2/cert-manager-cainjector/0.log" Jan 27 16:36:27 crc kubenswrapper[4697]: I0127 16:36:27.438962 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8n4gj_a29b72d6-fcd5-4a5a-b779-437cfc4c8365/cert-manager-webhook/0.log" Jan 27 16:36:31 crc kubenswrapper[4697]: I0127 16:36:31.116535 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:31 crc kubenswrapper[4697]: I0127 16:36:31.116880 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:31 crc kubenswrapper[4697]: I0127 16:36:31.161738 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:31 crc kubenswrapper[4697]: I0127 16:36:31.564001 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:33 crc kubenswrapper[4697]: I0127 16:36:33.926167 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vrv8"] Jan 27 16:36:33 crc kubenswrapper[4697]: I0127 16:36:33.926885 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6vrv8" podUID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerName="registry-server" containerID="cri-o://0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e" gracePeriod=2 Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.395688 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.422014 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-utilities\") pod \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.422089 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpslk\" (UniqueName: \"kubernetes.io/projected/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-kube-api-access-mpslk\") pod \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.422155 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-catalog-content\") pod \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\" (UID: \"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854\") " Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.423286 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-utilities" (OuterVolumeSpecName: "utilities") pod "15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" (UID: "15cecddc-3aaf-4f2e-9f4a-a3de77ea2854"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.450140 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-kube-api-access-mpslk" (OuterVolumeSpecName: "kube-api-access-mpslk") pod "15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" (UID: "15cecddc-3aaf-4f2e-9f4a-a3de77ea2854"). InnerVolumeSpecName "kube-api-access-mpslk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.472906 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" (UID: "15cecddc-3aaf-4f2e-9f4a-a3de77ea2854"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.524262 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.524548 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpslk\" (UniqueName: \"kubernetes.io/projected/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-kube-api-access-mpslk\") on node \"crc\" DevicePath \"\"" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.524559 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.536105 4697 generic.go:334] "Generic (PLEG): container finished" podID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerID="0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e" exitCode=0 Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.536138 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vrv8" event={"ID":"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854","Type":"ContainerDied","Data":"0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e"} Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.536163 4697 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-6vrv8" event={"ID":"15cecddc-3aaf-4f2e-9f4a-a3de77ea2854","Type":"ContainerDied","Data":"7d359aaa4af9a78e2b0ec52e11c742becdb943826dd0f89e4580a29c1d59de41"} Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.536179 4697 scope.go:117] "RemoveContainer" containerID="0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.536280 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vrv8" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.563590 4697 scope.go:117] "RemoveContainer" containerID="944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.567521 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vrv8"] Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.577750 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6vrv8"] Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.579887 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.589120 4697 scope.go:117] "RemoveContainer" containerID="58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.639208 4697 scope.go:117] "RemoveContainer" containerID="0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e" Jan 27 16:36:34 crc kubenswrapper[4697]: E0127 16:36:34.639660 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e\": container with ID starting with 
0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e not found: ID does not exist" containerID="0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.639700 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e"} err="failed to get container status \"0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e\": rpc error: code = NotFound desc = could not find container \"0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e\": container with ID starting with 0afac68af3ac4569505364d942a8760397ed695bbbbda87d3e7472e672b1ef5e not found: ID does not exist" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.639727 4697 scope.go:117] "RemoveContainer" containerID="944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897" Jan 27 16:36:34 crc kubenswrapper[4697]: E0127 16:36:34.640225 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897\": container with ID starting with 944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897 not found: ID does not exist" containerID="944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.640256 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897"} err="failed to get container status \"944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897\": rpc error: code = NotFound desc = could not find container \"944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897\": container with ID starting with 944a408b77afe0e9845908ee01811e4b6642ccae8de38de06799f56e46393897 not found: ID does not 
exist" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.640274 4697 scope.go:117] "RemoveContainer" containerID="58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9" Jan 27 16:36:34 crc kubenswrapper[4697]: E0127 16:36:34.640518 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9\": container with ID starting with 58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9 not found: ID does not exist" containerID="58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9" Jan 27 16:36:34 crc kubenswrapper[4697]: I0127 16:36:34.640560 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9"} err="failed to get container status \"58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9\": rpc error: code = NotFound desc = could not find container \"58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9\": container with ID starting with 58a936fae44dd3efb12accca89dc118ed50d1800b5fbf3b782c8419893153de9 not found: ID does not exist" Jan 27 16:36:35 crc kubenswrapper[4697]: I0127 16:36:35.555900 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"812f5dcbebfe4bb3abef8c2d768692d4d04f2a2e0f075a585dfa370157a13a72"} Jan 27 16:36:36 crc kubenswrapper[4697]: I0127 16:36:36.581105 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" path="/var/lib/kubelet/pods/15cecddc-3aaf-4f2e-9f4a-a3de77ea2854/volumes" Jan 27 16:36:40 crc kubenswrapper[4697]: I0127 16:36:40.905610 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7brtn_a10fdbc2-63e2-4b0b-afee-5ce01520801e/nmstate-console-plugin/0.log" Jan 27 16:36:41 crc kubenswrapper[4697]: I0127 16:36:41.055047 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-nlwcd_0cb7d58a-50bd-4ae2-9e83-5c689667726d/nmstate-handler/0.log" Jan 27 16:36:41 crc kubenswrapper[4697]: I0127 16:36:41.102211 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hkj82_c1ae0702-73b7-45df-88fb-4e93ab7f6496/kube-rbac-proxy/0.log" Jan 27 16:36:41 crc kubenswrapper[4697]: I0127 16:36:41.130368 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hkj82_c1ae0702-73b7-45df-88fb-4e93ab7f6496/nmstate-metrics/0.log" Jan 27 16:36:41 crc kubenswrapper[4697]: I0127 16:36:41.276881 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9ztbp_a886f00e-2d21-4e80-81d0-06650c1e178f/nmstate-operator/0.log" Jan 27 16:36:41 crc kubenswrapper[4697]: I0127 16:36:41.330716 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-vt4ml_0133bab9-91e4-4ff6-8dc1-cf282e197dd0/nmstate-webhook/0.log" Jan 27 16:37:09 crc kubenswrapper[4697]: I0127 16:37:09.434763 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-shgkw_0ecbc291-e00b-42be-b1dc-fd53bcb5256a/kube-rbac-proxy/0.log" Jan 27 16:37:09 crc kubenswrapper[4697]: I0127 16:37:09.498608 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-shgkw_0ecbc291-e00b-42be-b1dc-fd53bcb5256a/controller/0.log" Jan 27 16:37:09 crc kubenswrapper[4697]: I0127 16:37:09.600802 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-frr-files/0.log" Jan 
27 16:37:09 crc kubenswrapper[4697]: I0127 16:37:09.780911 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-frr-files/0.log" Jan 27 16:37:09 crc kubenswrapper[4697]: I0127 16:37:09.821739 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-reloader/0.log" Jan 27 16:37:09 crc kubenswrapper[4697]: I0127 16:37:09.826294 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-reloader/0.log" Jan 27 16:37:09 crc kubenswrapper[4697]: I0127 16:37:09.855920 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-metrics/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.039457 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-frr-files/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.081115 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-metrics/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.110562 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-reloader/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.158032 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-metrics/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.379593 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-frr-files/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.380173 4697 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-metrics/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.392219 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/controller/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.422523 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/cp-reloader/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.585163 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/kube-rbac-proxy/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.614294 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/frr-metrics/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.688111 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/kube-rbac-proxy-frr/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.901493 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/reloader/0.log" Jan 27 16:37:10 crc kubenswrapper[4697]: I0127 16:37:10.935342 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-kk5jh_cb6be63b-c3fd-4e21-a1b3-ffc11357a98f/frr-k8s-webhook-server/0.log" Jan 27 16:37:11 crc kubenswrapper[4697]: I0127 16:37:11.248877 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-976dcb485-6tnr7_fc55dd19-5186-4ee0-b54d-0fec0c93f30a/manager/0.log" Jan 27 16:37:11 crc kubenswrapper[4697]: I0127 16:37:11.694353 4697 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5476f886c6-mrv5l_4779f8a7-b446-4128-8800-0b6420fda6d8/webhook-server/0.log" Jan 27 16:37:11 crc kubenswrapper[4697]: I0127 16:37:11.723140 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g49qd_de0040d7-f7cb-4a80-ba9a-bbc8898365e1/frr/0.log" Jan 27 16:37:11 crc kubenswrapper[4697]: I0127 16:37:11.943371 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8stft_18479ade-7486-4889-b313-79c6598cc773/kube-rbac-proxy/0.log" Jan 27 16:37:12 crc kubenswrapper[4697]: I0127 16:37:12.152324 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8stft_18479ade-7486-4889-b313-79c6598cc773/speaker/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.007368 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/util/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.128531 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/pull/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.170174 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/util/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.192960 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/pull/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.348945 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/pull/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.350116 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/util/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.409290 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6kvwz_a5c419f2-da90-4ed6-8155-03cba6840bc7/extract/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.544096 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/util/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.718648 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/pull/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.731013 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/util/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.765130 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/pull/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.925178 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/extract/0.log" Jan 
27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.965197 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/pull/0.log" Jan 27 16:37:27 crc kubenswrapper[4697]: I0127 16:37:27.968265 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136h4fd_efe31ae7-f928-4690-b47c-57c996d20817/util/0.log" Jan 27 16:37:28 crc kubenswrapper[4697]: I0127 16:37:28.127659 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-utilities/0.log" Jan 27 16:37:28 crc kubenswrapper[4697]: I0127 16:37:28.290163 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-utilities/0.log" Jan 27 16:37:28 crc kubenswrapper[4697]: I0127 16:37:28.302814 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-content/0.log" Jan 27 16:37:28 crc kubenswrapper[4697]: I0127 16:37:28.315341 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-content/0.log" Jan 27 16:37:28 crc kubenswrapper[4697]: I0127 16:37:28.534377 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-utilities/0.log" Jan 27 16:37:28 crc kubenswrapper[4697]: I0127 16:37:28.542208 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/extract-content/0.log" Jan 27 16:37:28 crc kubenswrapper[4697]: I0127 16:37:28.753963 
4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-utilities/0.log" Jan 27 16:37:29 crc kubenswrapper[4697]: I0127 16:37:29.142257 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sc5n9_238b6b97-1f60-4c86-a041-351dba477c64/registry-server/0.log" Jan 27 16:37:29 crc kubenswrapper[4697]: I0127 16:37:29.301103 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-content/0.log" Jan 27 16:37:29 crc kubenswrapper[4697]: I0127 16:37:29.317973 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-utilities/0.log" Jan 27 16:37:29 crc kubenswrapper[4697]: I0127 16:37:29.337624 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-content/0.log" Jan 27 16:37:29 crc kubenswrapper[4697]: I0127 16:37:29.461976 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-utilities/0.log" Jan 27 16:37:29 crc kubenswrapper[4697]: I0127 16:37:29.478956 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/extract-content/0.log" Jan 27 16:37:29 crc kubenswrapper[4697]: I0127 16:37:29.674618 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hwq4c_e4c801e2-39ef-4230-8bb0-fed36eccba1a/marketplace-operator/0.log" Jan 27 16:37:29 crc kubenswrapper[4697]: I0127 16:37:29.972550 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6d8lm_53f86999-4825-49b5-8652-a6b6bcc1dc5e/registry-server/0.log" Jan 27 16:37:29 crc kubenswrapper[4697]: I0127 16:37:29.988750 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-utilities/0.log" Jan 27 16:37:30 crc kubenswrapper[4697]: I0127 16:37:30.153505 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-utilities/0.log" Jan 27 16:37:30 crc kubenswrapper[4697]: I0127 16:37:30.213631 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-content/0.log" Jan 27 16:37:30 crc kubenswrapper[4697]: I0127 16:37:30.214885 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-content/0.log" Jan 27 16:37:30 crc kubenswrapper[4697]: I0127 16:37:30.445750 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-content/0.log" Jan 27 16:37:30 crc kubenswrapper[4697]: I0127 16:37:30.446886 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/extract-utilities/0.log" Jan 27 16:37:30 crc kubenswrapper[4697]: I0127 16:37:30.675271 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9lwr_24c17e72-9143-4da4-8b8f-0a777f568dfc/registry-server/0.log" Jan 27 16:37:30 crc kubenswrapper[4697]: I0127 16:37:30.680752 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-utilities/0.log" Jan 27 16:37:30 crc kubenswrapper[4697]: I0127 16:37:30.913294 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-content/0.log" Jan 27 16:37:30 crc kubenswrapper[4697]: I0127 16:37:30.942149 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-content/0.log" Jan 27 16:37:31 crc kubenswrapper[4697]: I0127 16:37:31.011485 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-utilities/0.log" Jan 27 16:37:31 crc kubenswrapper[4697]: I0127 16:37:31.170148 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-utilities/0.log" Jan 27 16:37:31 crc kubenswrapper[4697]: I0127 16:37:31.198949 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/extract-content/0.log" Jan 27 16:37:31 crc kubenswrapper[4697]: I0127 16:37:31.673886 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cbj7r_52ecc276-9ad2-4527-9e59-a4e19c63d851/registry-server/0.log" Jan 27 16:38:55 crc kubenswrapper[4697]: I0127 16:38:55.108879 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:38:55 crc kubenswrapper[4697]: I0127 16:38:55.109495 4697 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.743050 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8rzgx"] Jan 27 16:38:57 crc kubenswrapper[4697]: E0127 16:38:57.743955 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerName="extract-content" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.743969 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerName="extract-content" Jan 27 16:38:57 crc kubenswrapper[4697]: E0127 16:38:57.743986 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerName="extract-utilities" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.743996 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerName="extract-utilities" Jan 27 16:38:57 crc kubenswrapper[4697]: E0127 16:38:57.744046 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerName="registry-server" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.744054 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerName="registry-server" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.744271 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="15cecddc-3aaf-4f2e-9f4a-a3de77ea2854" containerName="registry-server" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.745882 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.785351 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rzgx"] Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.864901 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-utilities\") pod \"redhat-operators-8rzgx\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.865573 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kjgf\" (UniqueName: \"kubernetes.io/projected/8b6a16b1-eda0-482c-b66c-38c3591fb56c-kube-api-access-8kjgf\") pod \"redhat-operators-8rzgx\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.865722 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-catalog-content\") pod \"redhat-operators-8rzgx\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.967859 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-catalog-content\") pod \"redhat-operators-8rzgx\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.968522 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-catalog-content\") pod \"redhat-operators-8rzgx\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.968705 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-utilities\") pod \"redhat-operators-8rzgx\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.969099 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-utilities\") pod \"redhat-operators-8rzgx\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.969311 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kjgf\" (UniqueName: \"kubernetes.io/projected/8b6a16b1-eda0-482c-b66c-38c3591fb56c-kube-api-access-8kjgf\") pod \"redhat-operators-8rzgx\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:57 crc kubenswrapper[4697]: I0127 16:38:57.990519 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kjgf\" (UniqueName: \"kubernetes.io/projected/8b6a16b1-eda0-482c-b66c-38c3591fb56c-kube-api-access-8kjgf\") pod \"redhat-operators-8rzgx\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:58 crc kubenswrapper[4697]: I0127 16:38:58.071603 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:38:58 crc kubenswrapper[4697]: I0127 16:38:58.586917 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rzgx"] Jan 27 16:38:58 crc kubenswrapper[4697]: I0127 16:38:58.956684 4697 generic.go:334] "Generic (PLEG): container finished" podID="8b6a16b1-eda0-482c-b66c-38c3591fb56c" containerID="afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d" exitCode=0 Jan 27 16:38:58 crc kubenswrapper[4697]: I0127 16:38:58.956729 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rzgx" event={"ID":"8b6a16b1-eda0-482c-b66c-38c3591fb56c","Type":"ContainerDied","Data":"afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d"} Jan 27 16:38:58 crc kubenswrapper[4697]: I0127 16:38:58.956753 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rzgx" event={"ID":"8b6a16b1-eda0-482c-b66c-38c3591fb56c","Type":"ContainerStarted","Data":"2f29cd2446e0d1c3afa5d848e9488c83329979daa30b7a0f876a4ea5059be7c5"} Jan 27 16:38:59 crc kubenswrapper[4697]: I0127 16:38:59.967823 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rzgx" event={"ID":"8b6a16b1-eda0-482c-b66c-38c3591fb56c","Type":"ContainerStarted","Data":"9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033"} Jan 27 16:39:06 crc kubenswrapper[4697]: I0127 16:39:06.028838 4697 generic.go:334] "Generic (PLEG): container finished" podID="8b6a16b1-eda0-482c-b66c-38c3591fb56c" containerID="9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033" exitCode=0 Jan 27 16:39:06 crc kubenswrapper[4697]: I0127 16:39:06.028918 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rzgx" 
event={"ID":"8b6a16b1-eda0-482c-b66c-38c3591fb56c","Type":"ContainerDied","Data":"9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033"} Jan 27 16:39:07 crc kubenswrapper[4697]: I0127 16:39:07.039906 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rzgx" event={"ID":"8b6a16b1-eda0-482c-b66c-38c3591fb56c","Type":"ContainerStarted","Data":"6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84"} Jan 27 16:39:07 crc kubenswrapper[4697]: I0127 16:39:07.061887 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8rzgx" podStartSLOduration=2.54688588 podStartE2EDuration="10.06176801s" podCreationTimestamp="2026-01-27 16:38:57 +0000 UTC" firstStartedPulling="2026-01-27 16:38:58.958607432 +0000 UTC m=+5435.131007213" lastFinishedPulling="2026-01-27 16:39:06.473489552 +0000 UTC m=+5442.645889343" observedRunningTime="2026-01-27 16:39:07.059665889 +0000 UTC m=+5443.232065670" watchObservedRunningTime="2026-01-27 16:39:07.06176801 +0000 UTC m=+5443.234167791" Jan 27 16:39:08 crc kubenswrapper[4697]: I0127 16:39:08.072167 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:39:08 crc kubenswrapper[4697]: I0127 16:39:08.073346 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:39:09 crc kubenswrapper[4697]: I0127 16:39:09.115871 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rzgx" podUID="8b6a16b1-eda0-482c-b66c-38c3591fb56c" containerName="registry-server" probeResult="failure" output=< Jan 27 16:39:09 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 27 16:39:09 crc kubenswrapper[4697]: > Jan 27 16:39:18 crc kubenswrapper[4697]: I0127 16:39:18.124356 4697 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:39:18 crc kubenswrapper[4697]: I0127 16:39:18.192286 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:39:18 crc kubenswrapper[4697]: I0127 16:39:18.368214 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rzgx"] Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.147376 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8rzgx" podUID="8b6a16b1-eda0-482c-b66c-38c3591fb56c" containerName="registry-server" containerID="cri-o://6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84" gracePeriod=2 Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.615834 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.793431 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-catalog-content\") pod \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.793534 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kjgf\" (UniqueName: \"kubernetes.io/projected/8b6a16b1-eda0-482c-b66c-38c3591fb56c-kube-api-access-8kjgf\") pod \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.793632 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-utilities\") pod 
\"8b6a16b1-eda0-482c-b66c-38c3591fb56c\" (UID: \"8b6a16b1-eda0-482c-b66c-38c3591fb56c\") " Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.794555 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-utilities" (OuterVolumeSpecName: "utilities") pod "8b6a16b1-eda0-482c-b66c-38c3591fb56c" (UID: "8b6a16b1-eda0-482c-b66c-38c3591fb56c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.801866 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6a16b1-eda0-482c-b66c-38c3591fb56c-kube-api-access-8kjgf" (OuterVolumeSpecName: "kube-api-access-8kjgf") pod "8b6a16b1-eda0-482c-b66c-38c3591fb56c" (UID: "8b6a16b1-eda0-482c-b66c-38c3591fb56c"). InnerVolumeSpecName "kube-api-access-8kjgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.896180 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kjgf\" (UniqueName: \"kubernetes.io/projected/8b6a16b1-eda0-482c-b66c-38c3591fb56c-kube-api-access-8kjgf\") on node \"crc\" DevicePath \"\"" Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.896397 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.905327 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b6a16b1-eda0-482c-b66c-38c3591fb56c" (UID: "8b6a16b1-eda0-482c-b66c-38c3591fb56c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:39:20 crc kubenswrapper[4697]: I0127 16:39:20.998118 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6a16b1-eda0-482c-b66c-38c3591fb56c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.155975 4697 generic.go:334] "Generic (PLEG): container finished" podID="8b6a16b1-eda0-482c-b66c-38c3591fb56c" containerID="6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84" exitCode=0 Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.156011 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rzgx" event={"ID":"8b6a16b1-eda0-482c-b66c-38c3591fb56c","Type":"ContainerDied","Data":"6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84"} Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.156034 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rzgx" event={"ID":"8b6a16b1-eda0-482c-b66c-38c3591fb56c","Type":"ContainerDied","Data":"2f29cd2446e0d1c3afa5d848e9488c83329979daa30b7a0f876a4ea5059be7c5"} Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.156050 4697 scope.go:117] "RemoveContainer" containerID="6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.156068 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rzgx" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.192652 4697 scope.go:117] "RemoveContainer" containerID="9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.198064 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rzgx"] Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.207210 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8rzgx"] Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.222529 4697 scope.go:117] "RemoveContainer" containerID="afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.255749 4697 scope.go:117] "RemoveContainer" containerID="6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84" Jan 27 16:39:21 crc kubenswrapper[4697]: E0127 16:39:21.256096 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84\": container with ID starting with 6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84 not found: ID does not exist" containerID="6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.256130 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84"} err="failed to get container status \"6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84\": rpc error: code = NotFound desc = could not find container \"6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84\": container with ID starting with 6ff20eb390af31c51271f47ff52a25d37ba1582fcd294a3032957f20d0341e84 not found: ID does 
not exist" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.256153 4697 scope.go:117] "RemoveContainer" containerID="9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033" Jan 27 16:39:21 crc kubenswrapper[4697]: E0127 16:39:21.256444 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033\": container with ID starting with 9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033 not found: ID does not exist" containerID="9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.256472 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033"} err="failed to get container status \"9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033\": rpc error: code = NotFound desc = could not find container \"9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033\": container with ID starting with 9105e7213461e610544e15f57f4450c71b623b7f131ea30d26cd007fedd5d033 not found: ID does not exist" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.256487 4697 scope.go:117] "RemoveContainer" containerID="afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d" Jan 27 16:39:21 crc kubenswrapper[4697]: E0127 16:39:21.256716 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d\": container with ID starting with afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d not found: ID does not exist" containerID="afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d" Jan 27 16:39:21 crc kubenswrapper[4697]: I0127 16:39:21.256743 4697 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d"} err="failed to get container status \"afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d\": rpc error: code = NotFound desc = could not find container \"afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d\": container with ID starting with afdae3d5c9a823d81c3c72e5f549ccdcc691ea41d4f88756a7506c892ab34b0d not found: ID does not exist" Jan 27 16:39:22 crc kubenswrapper[4697]: I0127 16:39:22.584161 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6a16b1-eda0-482c-b66c-38c3591fb56c" path="/var/lib/kubelet/pods/8b6a16b1-eda0-482c-b66c-38c3591fb56c/volumes" Jan 27 16:39:25 crc kubenswrapper[4697]: I0127 16:39:25.108799 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:39:25 crc kubenswrapper[4697]: I0127 16:39:25.109132 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:39:53 crc kubenswrapper[4697]: I0127 16:39:53.458825 4697 generic.go:334] "Generic (PLEG): container finished" podID="1f5fb25b-ff4d-4dc7-9f17-5124bce9f729" containerID="2c923a1e32a8d931fa6cacdf9ede2a8fb460752e3ffc2d65c7fc276afc51b285" exitCode=0 Jan 27 16:39:53 crc kubenswrapper[4697]: I0127 16:39:53.458956 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2bwm/must-gather-mwlm9" 
event={"ID":"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729","Type":"ContainerDied","Data":"2c923a1e32a8d931fa6cacdf9ede2a8fb460752e3ffc2d65c7fc276afc51b285"} Jan 27 16:39:53 crc kubenswrapper[4697]: I0127 16:39:53.462103 4697 scope.go:117] "RemoveContainer" containerID="2c923a1e32a8d931fa6cacdf9ede2a8fb460752e3ffc2d65c7fc276afc51b285" Jan 27 16:39:53 crc kubenswrapper[4697]: I0127 16:39:53.872015 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2bwm_must-gather-mwlm9_1f5fb25b-ff4d-4dc7-9f17-5124bce9f729/gather/0.log" Jan 27 16:39:55 crc kubenswrapper[4697]: I0127 16:39:55.108457 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:39:55 crc kubenswrapper[4697]: I0127 16:39:55.108836 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:39:55 crc kubenswrapper[4697]: I0127 16:39:55.108896 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wz495" Jan 27 16:39:55 crc kubenswrapper[4697]: I0127 16:39:55.109667 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"812f5dcbebfe4bb3abef8c2d768692d4d04f2a2e0f075a585dfa370157a13a72"} pod="openshift-machine-config-operator/machine-config-daemon-wz495" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:39:55 crc kubenswrapper[4697]: I0127 
16:39:55.109754 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" containerID="cri-o://812f5dcbebfe4bb3abef8c2d768692d4d04f2a2e0f075a585dfa370157a13a72" gracePeriod=600
Jan 27 16:39:55 crc kubenswrapper[4697]: I0127 16:39:55.482821 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerID="812f5dcbebfe4bb3abef8c2d768692d4d04f2a2e0f075a585dfa370157a13a72" exitCode=0
Jan 27 16:39:55 crc kubenswrapper[4697]: I0127 16:39:55.482904 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerDied","Data":"812f5dcbebfe4bb3abef8c2d768692d4d04f2a2e0f075a585dfa370157a13a72"}
Jan 27 16:39:55 crc kubenswrapper[4697]: I0127 16:39:55.483223 4697 scope.go:117] "RemoveContainer" containerID="f5883d34a63778ee686555f75d5c9331c19bb6049a5349238b0b49ff0a60e6ee"
Jan 27 16:39:56 crc kubenswrapper[4697]: I0127 16:39:56.500970 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wz495" event={"ID":"e9bec8bc-b2a6-4865-83ca-692ae5c022a6","Type":"ContainerStarted","Data":"add051617a9897f248e471e015c912e5b8fe8fa86f0c6b5bc2f096d54ccdc048"}
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.276664 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2bwm/must-gather-mwlm9"]
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.277720 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-w2bwm/must-gather-mwlm9" podUID="1f5fb25b-ff4d-4dc7-9f17-5124bce9f729" containerName="copy" containerID="cri-o://7436ff36f164596e240ba0206ae565af669590c834cb31f03a3c39727a123426" gracePeriod=2
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.291496 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2bwm/must-gather-mwlm9"]
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.638002 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2bwm_must-gather-mwlm9_1f5fb25b-ff4d-4dc7-9f17-5124bce9f729/copy/0.log"
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.638896 4697 generic.go:334] "Generic (PLEG): container finished" podID="1f5fb25b-ff4d-4dc7-9f17-5124bce9f729" containerID="7436ff36f164596e240ba0206ae565af669590c834cb31f03a3c39727a123426" exitCode=143
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.738142 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2bwm_must-gather-mwlm9_1f5fb25b-ff4d-4dc7-9f17-5124bce9f729/copy/0.log"
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.743282 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.817602 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-must-gather-output\") pod \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\" (UID: \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\") "
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.817692 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4glk7\" (UniqueName: \"kubernetes.io/projected/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-kube-api-access-4glk7\") pod \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\" (UID: \"1f5fb25b-ff4d-4dc7-9f17-5124bce9f729\") "
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.833067 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-kube-api-access-4glk7" (OuterVolumeSpecName: "kube-api-access-4glk7") pod "1f5fb25b-ff4d-4dc7-9f17-5124bce9f729" (UID: "1f5fb25b-ff4d-4dc7-9f17-5124bce9f729"). InnerVolumeSpecName "kube-api-access-4glk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.919957 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4glk7\" (UniqueName: \"kubernetes.io/projected/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-kube-api-access-4glk7\") on node \"crc\" DevicePath \"\""
Jan 27 16:40:09 crc kubenswrapper[4697]: I0127 16:40:09.987168 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1f5fb25b-ff4d-4dc7-9f17-5124bce9f729" (UID: "1f5fb25b-ff4d-4dc7-9f17-5124bce9f729"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:40:10 crc kubenswrapper[4697]: I0127 16:40:10.022031 4697 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 27 16:40:10 crc kubenswrapper[4697]: I0127 16:40:10.585541 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f5fb25b-ff4d-4dc7-9f17-5124bce9f729" path="/var/lib/kubelet/pods/1f5fb25b-ff4d-4dc7-9f17-5124bce9f729/volumes"
Jan 27 16:40:10 crc kubenswrapper[4697]: I0127 16:40:10.654415 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2bwm_must-gather-mwlm9_1f5fb25b-ff4d-4dc7-9f17-5124bce9f729/copy/0.log"
Jan 27 16:40:10 crc kubenswrapper[4697]: I0127 16:40:10.658285 4697 scope.go:117] "RemoveContainer" containerID="7436ff36f164596e240ba0206ae565af669590c834cb31f03a3c39727a123426"
Jan 27 16:40:10 crc kubenswrapper[4697]: I0127 16:40:10.658396 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2bwm/must-gather-mwlm9"
Jan 27 16:40:10 crc kubenswrapper[4697]: I0127 16:40:10.691266 4697 scope.go:117] "RemoveContainer" containerID="2c923a1e32a8d931fa6cacdf9ede2a8fb460752e3ffc2d65c7fc276afc51b285"
Jan 27 16:41:55 crc kubenswrapper[4697]: I0127 16:41:55.109036 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:41:55 crc kubenswrapper[4697]: I0127 16:41:55.109676 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:42:25 crc kubenswrapper[4697]: I0127 16:42:25.109079 4697 patch_prober.go:28] interesting pod/machine-config-daemon-wz495 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:42:25 crc kubenswrapper[4697]: I0127 16:42:25.109657 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wz495" podUID="e9bec8bc-b2a6-4865-83ca-692ae5c022a6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:42:30 crc kubenswrapper[4697]: E0127 16:42:30.863116 4697 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"